10 Lessons from the First Decisions of Facebook’s Oversight Board

Matthew Schafer
10 min read · Feb 6, 2021


Last month, Facebook’s independent Oversight Board, created to adjudicate disputes over content takedowns by Facebook, issued its first five substantive opinions.

These are, of course, just the beginning of the Board’s “jurisprudence,” but because these cases do have precedential value (indeed, they are considered “highly persuasive”), it’s worth picking out some of their common threads.

While several are outlined below, the top line is this: The Board will be extremely protective of speech. In fact, it seems likely that the Board will uphold takedowns only in cases where there is some kind of imminent harm if the content remains up. (The Trump ban, while likely to be upheld as applied at the time of the insurrection, is almost assuredly going to be reversed to the extent the ban is forward looking.)

As a result, it seems unlikely that the Board will develop a jurisprudence that addresses this moment, in which misinformation runs rampant on social media platforms and can manifest harms in the real world. That is, unfortunately, a missed opportunity to think about speech in a new way.

1. A Presumption Against Takedowns

The Oversight Board reversed Facebook’s takedowns in four out of five cases. In each case, the Board referenced the importance of a user’s “Voice” as described in Facebook’s Values, finding it to be the most important Value. It did so based on Facebook’s own explanation that “[o]ur commitment to giving people voice remains paramount.” (The five Values are: Voice, Authenticity, Safety, Privacy, and Dignity.) As the Board explained in the Myanmar Case, “Facebook takes ‘Voice’ as a paramount value.” Similar statements were made in the other cases, as well.

In addition, relying on international human rights standards, the Board has recognized that “the scope of the right to freedom of expression is broad.” This is especially so where, as the Board recognized in the Azerbaijan Case, the speech relates to “political issues” and “historical claims,” irrespective of whether they are “inaccurate or contested and even when they may cause offense.”

As such, the starting point for the Board appears to be that takedowns are presumptively improper. A right to one’s Voice will take precedence over other Values, even where the content at issue is false. Only when those Values are sufficiently weighty to “displace” Voice (a term of art, it appears) will a takedown be proper.

2. The Three Kinds Of Relevant Standards

Each opinion begins with a review of the three relevant standards:

Facebook’s Community Standards “are a guide for what is and isn’t allowed on Facebook.” They address categories such as Violence and Criminal Behavior, Safety, and Objectionable Content, among others.

Facebook’s Values “serve as the basis for” its Community Standards and are set out in their preamble. At the same time, the Values “reflect the policies” laid out in the Community Standards.

The Relevant Human Rights Standards (“RHRS”) comprise non-Facebook guidance, including the UN Guiding Principles on Business and Human Rights, the International Covenant on Civil and Political Rights, and the International Convention on the Elimination of All Forms of Racial Discrimination.

The Board will also rely on subject-matter-specific guidance, as it did in the France Case, where it relied on the UN Special Rapporteur on freedom of opinion and expression’s report on Disease Pandemics and the Freedom of Opinion and Expression.

3. Community Standards vs. Values vs. Relevant Human Rights Standards

While the Board explains the three kinds of standards it uses, it does not explain how the competing interests of those standards interact or how much weight should be given to one kind over another — the one exception being that Voice, a Value, remains the paramount consideration.

For example, if a takedown did not satisfy the RHRS but did satisfy Facebook’s Values and Community Standards, it’s unclear which would take precedence.

In the Myanmar Case, the Board seemed to separately assess each of the three standards, and then make an ultimate (squishy) determination based on an amalgam of the inquiries.

In the Azerbaijan Case, however, the Board seemed to be guided primarily by the RHRS, particularly the ICCPR’s test for limiting freedom of expression.

It seems the only indication we have as to how these standards affect each other is that a violation of the Community Standards alone does not require a takedown.

In the Myanmar Case, the content at issue (despite the Board’s handwringing) pretty clearly violated the plain language of the Community Standards (a charge of intellectual deficiency against adherents of a religion). Nevertheless, in light of the Values and the RHRS, the Board did not approve the takedown.

Perhaps this means that the takedown must not violate at least two of the three kinds of standards? Or perhaps an especially odious violation of just one of the three will suffice, while mild violations will not?

In other words, it’s unclear whether the assessment is qualitative within each kind of standard (e.g., how badly the content violated a Value), quantitative as to how many kinds of standards are violated before a takedown is proper (e.g., the content violated the Community Standards and the Values, but not the RHRS), or some qualitative weighing as between the standards (e.g., a manifest infringement of one standard weighed against another).

4. Falsity Alone Is Not Sufficient For A Takedown

Adopting a broad understanding of freedom of speech, the Board appears unwilling to permit takedowns of false information absent some kind of imminent harm.

For example, in the France Case, the Board found that while the content at issue was COVID-19 misinformation, Facebook had “not demonstrated how this user’s post contributed to imminent harm in this case.”

There, the user had posted a video criticizing the French government for refusing “to authorize hydroxychloroquine combined with azithromycin for use against COVID-19,” but authorizing remdesivir. The post, which characterized hydroxychloroquine as “harmless,” was shared in a group with “500,000 members and received about 50,000 views, about 800–900 reactions … and was shared by 500–600 people.”

The Board found that “serious questions remain about how the post would result in imminent harm.” It noted, as well, that the case raised “the question of whether an allegedly factually incorrect claim in a broader post criticizing governmental policy should trigger the removal of the entire post,” suggesting that the Board will be especially unwilling to remove false, political speech — a finding that could, potentially, let political misinformation spread far and wide.

To be sure, in the France Case, the Board was holding Facebook to “its own imminent harm standard,” but other cases suggest that immediacy of harm will be central to its decisionmaking. In the Myanmar Case, for example, the Board concluded that the content did not “intentionally incite any form of imminent harm” and thus should not have been taken down. And in the Azerbaijan Case, the Board found that because of an “especially pronounced” likelihood of harm “leading to offline action impacting the right to security of person and potentially life,” a takedown was appropriate.

5. The Prevailing Test Under RHRS

While RHRS are just one of three kinds of relevant standards in the decisions, the Board has endorsed as a guiding principle the well-established test that restrictions on expression should “meet the requirements of (1) legality, (2) legitimate aim, and (3) necessity and proportionality.”

In the Brazil Case, the Board explained that to satisfy the “legality” requirement, Facebook’s rules must be “clear, precise, and publicly accessible.” This requirement, citing guidance on the ICCPR, is meant to “guard[] against arbitrary censorship.”

The legitimate aim requirement demands that restrictions on expression protect other important rights. When it came to slurs, the Board explained, “Facebook’s prohibition . . . seeks to protect people’s rights to equality and non-discrimination, to exercise their freedom of expression on the platform without being harassed or threatened, to protect the right to security of person from foreseeable and intentional injury, and even the right to life.”

Finally, the Board explained, “Necessity and proportionality require Facebook to show that its restriction on freedom of expression was necessary to address the threat, in this case the threat to the rights of others, and that it was not overly broad.”

6. Context Is King

Context will also matter when deciding whether something violates Community Standards, Values, or RHRS. As the Board said in one decision, “Context is key.” And, it’s clear that there are different “kinds of context” that the Board will consider in any given case.

First, as the Board explained in the Myanmar Case, “the post should be read as a whole.” And, as it explained in the Azerbaijan Case, “There may be instances in which words that are demeaning in one context might be more benign, or even empowering, in another.”

Second, the Board will consider the broader “socio-political and cultural context” of the content. In the Azerbaijan Case, for example, the Board recognized that the “conflict between Armenia and Azerbaijan, neighbors in the Southeast Caucasus, is of long standing. . . . The content in question was posted to Facebook shortly before a ceasefire went into effect. This context was especially relevant for the Board.”

For that socio-political and cultural context to matter, however, the Board apparently will require a “fit” between it and the content at issue. In the Myanmar Case, the Board found the social context irrelevant (or at least less relevant than in the Azerbaijan Case), because “there was no indication that statements referring to Muslims as mentally unwell or psychologically unstable are a significant part of [long-standing] anti-Muslim rhetoric in Myanmar.”

7. The Importance of Human Decisionmaking

The Board believes that Facebook is “over-reliant” on automated takedowns, leading to “over-enforcement,” while recognizing that such technologies are necessary to detect prohibited content at scale. That reliance is especially problematic, the Board believes, where there is no human review of automated takedowns.

As the Board explained in the Brazil Case, “The Board is concerned that the content was wrongfully removed by an automated enforcement system and potentially without human review or appeal.” Automated technologies, the Board said, cannot fully “understand context and grasp the complexity of human communication for content moderation.”

As a result, the Board appears to have adopted a rule that human review of automated takedowns is required. As the Board explained, “Appeal to human review should be offered” in response to automated takedowns, “allowing enforcement mistakes to be repaired.”
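To make the workflow the Board is describing concrete, here is a minimal, purely illustrative sketch in Python. It is not Facebook’s actual system; the classifier, the names, and the threshold are all hypothetical. The only point it shows is the Board’s rule: an automated removal should never be final until a human can review an appeal and repair a mistake.

```python
# Illustrative sketch only: all names and thresholds are hypothetical,
# not Facebook's actual enforcement system.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Post:
    post_id: str
    text: str
    removed: bool = False
    removal_source: Optional[str] = None  # "automated" or "human"
    appeal_open: bool = False


def automated_classifier_score(post: Post) -> float:
    """Stand-in for a machine-learning classifier; returns a violation score."""
    return 0.97 if "banned-term" in post.text else 0.01


def automated_takedown(post: Post, threshold: float = 0.95) -> None:
    """Remove content when the classifier is confident, but always open an appeal."""
    if automated_classifier_score(post) >= threshold:
        post.removed = True
        post.removal_source = "automated"
        post.appeal_open = True  # the Board's rule: human appeal must be available


def human_review(post: Post, reviewer_finds_violation: bool) -> None:
    """A human reviewer can repair an enforcement mistake made by automation."""
    if post.appeal_open:
        post.removed = reviewer_finds_violation
        post.removal_source = "human"
        post.appeal_open = False


post = Post("1", "some banned-term in political commentary")
automated_takedown(post)                              # classifier removes the post
human_review(post, reviewer_finds_violation=False)    # mistake repaired on appeal
print(post.removed)                                   # False: content restored
```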

8. Board Broadly Interprets Its Jurisdiction

The Board interprets its jurisdiction broadly, and it will decide for itself whether it has jurisdiction over a case. In the Brazil Case, a machine learning classifier trained to identify nudity removed an Instagram post because of the presence of a female nipple (male nipples are allowed) in an advertisement promoting breast cancer awareness month. Around two months after the takedown (and after the Board accepted the case), Facebook restored the content.

At the Board, Facebook argued the case was moot. Specifically, Facebook argued “that, having restored the content, there is no disagreement that it should stay on Instagram” — a requirement for the case to be heard. The Board disagreed for four reasons:

  • First, its jurisdiction extended to any dispute that existed after the internal appeals process within Facebook completed. (In this instance, Facebook only restored the content after the Board accepted the case.)
  • Second, the Board retained jurisdiction as the removal caused “irreversible harm,” as the content was not restored until after conclusion of breast cancer awareness month.
  • Third, under the Bylaws, “Facebook is committed to take action on ‘identical content with parallel context.’” As such, the Board’s “decision[] extends far beyond the content in this case.”
  • Fourth, where automation is concerned, “the content policies are essentially embedded into code and may be considered inseparable from it and self-enforcing.” Human assessment is thus especially important.

9. Board Will Demand Clear Standards From Facebook

The Board will demand clearer standards from Facebook if Facebook is to defend its takedowns. In case after case, the Board suggested that Facebook clarify its Community Standards. For example, in the United States Case, the Board suggested that Facebook “set out a clear and accessible Community Standard on health misinformation, consolidating and clarifying existing rules in one place.”

10. A Lack Of Access

While the Board chastised Facebook in some cases for not providing “clear, precise, and publicly accessible” rules, the Board itself fails to provide sufficient public access. Indeed, the Board does not publish briefing provided by Facebook. And, the five-judge panels assigned to cases are not disclosed. All decisions are unsigned.

Along the same lines, neither the identity of nor the number of dissenters is disclosed. In the Azerbaijan Case, for example, the Board noted that a “minority” found the removal improper. And, of those, there was disagreement as to why it was improper. Yet, we do not know who dissented, who disagreed among them, or how many did so.

Although the Board applies to Facebook the principles underlying the ICCPR, it does not apply those principles to itself. General Comment 34 to the ICCPR explains that a right to expression “embraces a right of access to information held by public bodies.”

As the European Court of Human Rights long ago explained, “The holding of court hearings in public constitutes a fundamental principle enshrined in paragraph 1 of Article 6 [fair trial].” Such transparency “protects litigants against the administration of justice in secret with no public scrutiny; it is also one of the means whereby confidence in the courts can be maintained.”

The Board, however, falls short of this standard by not signing its opinions or disclosing all relevant material before it that might affect its decisions. It should do so if it intends to maintain credibility.


Written by Matthew Schafer

Media Lawyer. Adjunct Professor/Mass Media Law at Fordham University School of Law. Twitter @MatthewSchafer
