This post originally appeared on the Georgia Tech School of Public Policy’s Internet Governance Project blog titled: Facebook’s Oversight Board: A toothless Supreme Court?
A week and a half ago, Facebook released the final charter for the Oversight Board it intends to create to tackle its content moderation problems. The charter spells out exactly how the board would work, who would sit on it and what its responsibilities would be. CNET has a good basic description of what the board would do, as do most news outlets.
This post is not meant to explain the details of the board. Who decides who gets to sit on the board (initially Facebook and an executive staffing firm, then the board members themselves), the members’ paid status (part-time), their level of experience (significant), and so on are also important topics, as each of these decisions can have a substantial effect on the board. But details like these pale in comparison to the fundamental question about this oversight board: is it designed to truly solve the issue, or is it a PR move concocted to get political heat off Facebook? To get to the bottom of that, let’s look at what the company means by oversight and the concerns that stem from it.
What does oversight mean?
The concept of oversight is not well defined in private governance (loosely used here to mean non-state governance of a policy area). Oversight means different things in a corporate governance context and in a nation-state context. In North America, “oversight” in government can mean either judicial oversight or legislative oversight. Legal scholars consider judicial oversight a robust concept backed by a strong literature, while political scientists point to legislative oversight as a critical function of the nation-state that has received renewed scrutiny in recent years (covering oversight of both the executive branch and the judiciary).
Facebook’s choice of label (Oversight Board) can thus be seen as a nice rhetorical trick: it lets each reader project onto the proposal whatever form of oversight they prefer, based on their predisposition. The charter reads, broadly, as if the board will exercise oversight by adjudicating cases, which can build up a set of precedents, and thus an interpretation of the content guidelines that moderators can refer to. Additionally, if the need arises, the board may spell out policy proposals, either through the cases themselves or through explicit policy recommendations, which Facebook may or may not accept.
Thus, the fundamental assumption of the board, in its current form, is that the solution to Facebook’s content moderation issues is an external, judiciary-like body with the power and legitimacy to adjudicate hard and significant cases, a mandate that, if things go well for a year, would be extended to other willing companies.
But some aspects of the proposal raise serious concerns. This post will identify three of the most important ones: first, the judiciary aspect; then, the legitimacy concern; and finally, the notion that this would be a useful board for other companies. My argument is that the Oversight Board will most likely fail because of its improper construction.
A judiciary with no constitution
The idea for this Oversight Board came from a constitutional lawyer, Harvard professor Noah Feldman, who believed that what Facebook, a multinational corporation, needs to perform proper content moderation is a “Supreme Court.” This is a clear case of “when all you have is a hammer, everything looks like a nail.” Prof. Feldman, a renowned expert in building constitutions, went for a nation-state-centric view of the world.
It is clear that he was expecting significant push-back from Facebook executives, because he makes a point of saying that the most important part of the charter is that Facebook will obey (within reason) the Board’s judicial decisions. But a better understanding of the general processes within the content moderation cycle explains why this isn’t actually much of a concession.
A content moderation cycle can be said to have, in its purest form, four distinct but interconnected processes: 1) the creation of the guidelines, 2) the monitoring of content, 3) decision-making on the content, and 4) a remedy mechanism or final say. Much of the current conversation about content moderation focuses on the monitoring and decision-making parts, and it is easy to see why: it is a seemingly intractable problem, and the solutions put forth so far, human- or machine-based, both have significant downsides. Human moderation results in underpaid, overworked and scarred-for-life workers. Machine solutions lack context and are irony-illiterate. However, monitoring and decision-making are the two areas where private companies are fully within their rights to act unilaterally; in fact, the platforms are really the only ones in a position to perform those functions. While the ideal solutions for these processes are far from clear, they still have to be initiated by the companies themselves. That leaves the first and last processes: guideline creation and an appeals mechanism.
It is pretty difficult to find an online platform corporation that is genuinely interested in making the tough final choices about whether content stays up or comes down. These choices are riddled with political land mines: they can trigger any of the many loud grievance constituencies who are always willing to play victim, and they can genuinely hurt groups who are singled out. Very few content questions have a “universal” or “objective” right answer. It is, in short, a giant unproductive hassle for these companies. A consummate capitalist, Mark Zuckerberg is willing to make sacrifices to make more money for the company; for good measure, he would be even more willing to sacrifice parts of the process that only cause him and the company grief, like decisions on removing content, especially if doing so could get some legislative heat off his company. The willingness of Facebook executives to offload hard choices onto the oversight board should thus be easy to understand.
What is more, setting it up as a “judiciary” when there is no distinct “legislative” branch, only one monolithic “corporate” (executive) branch, misses the target. The creation of a “judiciary” allows executives to shift the discussion toward the final outcomes of content moderation and to ignore completely the process of content guideline creation; that is, the “legislative” aspect. Suspending disbelief for a moment, and assuming that the underlying nation-state model is applicable to Facebook, which is more important: having a “Supreme Court” and its decisions, or writing the “constitution,” the fundamental rules that are supposed to guide the Court’s decisions?
The “constitution” in this thought experiment is the output of the first process of content moderation, the creation of content guidelines. There may not be a clear-cut, definitive answer on how content guidelines should “legislate” on the many thorny issues. In fact, we may not even need one: the issue is not what the guidelines are per se, but how they are debated and created, and the underlying governance ethos and structure that supports them. A governance structure that is inclusive and transparent could alleviate a significant number of the public’s concerns with the perceived black box of choices and trade-offs that go into making those guidelines. An independent, diverse and transparent external board tasked with rewriting the guidelines, or providing input into their creation, would be a better proposal than one that “adjudicates” on people’s fundamental rights based on an arbitrary and constantly shifting set of standards. Which brings us to the second concern, that of legitimacy.
Legitimacy
For all practical purposes, the board lacks the power to make significant and lasting change where it matters most: in the actual content moderation guidelines. The board is tasked with adjudicating based on the guidelines (and Facebook’s “values”) and, in rare cases, giving recommendations that are not in any way binding. Digging a bit into the charter shows that this latter part of the board’s role is mostly window dressing: the part-time positions, the limited time commitment and the lack of staff dedicated to anything beyond the cases themselves mean that board members would have to devise changes to the guidelines in their spare time. Even if they were properly staffed and given enough time, it would be a paradox: the “judiciary” would be in charge of rewriting the fundamental rules it uses to adjudicate. So, if the true power lies in the content guidelines, the board’s inability to effect changes in them, or in the processes that yield them, means there is no true power in the oversight board.
A functionally powerless oversight board can thus be said to lack legitimacy by design. Why, then, would Facebook build it in such a way? Cynics and critics say the goal is to pass the buck from Facebook to a third party: the company can point to the board and say “these independent experts said we’re not biased in our decisions.” Even if that cynical interpretation is unwarranted, and the design simply reflects how complex and hectic an undertaking of this magnitude is, the current setup will still feed the cynical narrative.
The “judicial” metaphor has a pretty strong underlying premise: it signals the uncritical transfer of the fundamental expectations of a nation-state onto a corporate entity. This is dangerous, because a corporation is not, in any substantial way, similar to a nation-state. Different mechanisms govern its relationship to staff and consumers than those that govern a nation-state’s relationship to its citizens. There is no implied or existing social contract that gives the corporation any kind of power over its customers. A company has a duty to its shareholders, unlike a nation-state, which has a duty to secure certain basic rights for its citizens.
This means that the power and legitimacy that come with the judicial branch of a nation-state cannot simply be transferred to the “judicial” branch of a corporation. And here comes the kicker: this has not been done before, so there is no literature or empirical evidence that it would work and achieve legitimacy. By contrast, an external board acting as a consultative body tasked primarily with providing guidance on the content guidelines would have had a significant body of literature on private governance legitimacy (Quack 2010; Fransen 2012; Mena and Palazzo 2012; Hahn and Weidtmann 2016) and empirical examples (Gasser et al. 2015) to lean on.
Maybe the blinders of those who saw constitutional law as the best lens on content moderation were too limiting for them to see this (the author can attest to having raised this concern with team members before the charter was finalized). Or maybe the intent of Facebook’s executives was indeed to have it fail.
One of the more bizarre responses to the argument above is that a lot of hard work and thoughtful consideration went into the board. This is definitely true. As a participant in a small group of experts giving feedback on an intermediate draft of the charter, I saw first-hand the dedication, genuine interest and hard work that a smart and savvy team put into this process. The endless hours, the global consultations, the hundreds and thousands of pages of summaries and research, and the substantial amount of money poured into this endeavor are very real. The experts engaged to provide feedback spoke their minds and offered many thoughts that seem to have been taken into consideration, to an extent. Knowing all of that, though, does not change the fact that hard work and thoughtful consideration are not enough to make the board sufficiently legitimate.
The worst part about all the resources poured into this project is that Facebook actually did the work: it consulted with more or less the entire world and gathered an accurate X-ray of the general consensus (or lack thereof) on content moderation. Had the company been a bit more flexible about its fundamental assumption, the desire to make the board primarily a “judiciary,” it could have tapped into the literature and existing cases of external governance structures, which would have brought with them a stronger, more robust legitimacy. But that would have required a willingness to demystify and open the black box of content guideline creation, which, based on the final charter, seems to be the one area where Facebook is not actually willing to make concessions.
General Applicability
While the metaphor of a Supreme Court has not fared well so far in this analysis, it truly falls apart when it comes to the problem of general applicability (i.e., Facebook’s desire for other platforms to build on it). While the governance of the board (such as the Trust that would fund its activities and other mechanisms) allows it to be an independent institution, its substance and underlying philosophical assumptions still tie it to the company. This means the board would make decisions for Facebook as a product of the content guidelines Facebook has in place. Any further company joining would have to re-train the board on its own content guidelines, and would have to accept the board’s case decisions as immediately binding (within what is technically feasible). A new company would likely be wary of, and would not have much incentive to join, a third-party adjudication structure that has primarily adjudicated cases based on Facebook’s guidelines and was created around Facebook’s never-quite-spelled-out definition of oversight.
As I described above, a “final-say judiciary body that creates precedents” draws its institutional legitimacy from the fundamental text it is supposed to adjudicate against. If a new company joins, the metaphor crumbles, unless the board multiplies or gains new members who deal exclusively with that company’s cases. True, Facebook could try to franchise the concept of the oversight board, akin to the “exporting democracy” perspective, where a country in transition to democracy adopts wholesale the rules and institutions of a more established one. But the final goal of the board is supposedly not to create a separate board for each company, but to coalesce the industry around one board, bootstrapped through Facebook’s immense undertaking. Setting up what is primarily an appellate court, rather than a more robust governance structure, makes this harder if not impossible. A transition from a corporate governance structure (looking inward) to a private governance one (looking outward at the issue or industry level) is difficult enough by itself; one where the board’s main function is adjudication makes it vastly more difficult, because the decisions are not made in a vacuum, but on the basis of each company’s own guidelines.
Conclusion
Are there any actual solutions, or is Facebook’s half-assed attempt at something novel the best we can hope for? In two opinion editorials (one co-authored with the wickedly talented Danielle Tomson), I made the case for multistakeholder solutions, in some capacity, specifically for the guideline creation process of content moderation. As mentioned above, the groundwork has already been done by Facebook. It may be too late to turn this oversight board into one that actively governs rather than adjudicates, but that does not mean companies, along with other societal stakeholders, should not try for novel governance structures.
Citations
Fransen, L. (2012). Multi-stakeholder governance and voluntary programme interactions: legitimation politics in the institutional design of Corporate Social Responsibility. Socio-Economic Review, 10(1), 163–192. https://doi.org/10.1093/ser/mwr029
Gasser, U., Budish, R., & West, S. M. (2015). Multistakeholder as Governance Groups: Observations from Case Studies (SSRN Scholarly Paper No. 2549270). Rochester, NY: Social Science Research Network. Retrieved from http://papers.ssrn.com/abstract=2549270
Hahn, R., & Weidtmann, C. (2016). Transnational Governance, Deliberative Democracy, and the Legitimacy of ISO 26000: Analyzing the Case of a Global Multistakeholder Process. Business & Society, 55(1), 90–129.
Mena, S., & Palazzo, G. (2012). Input and Output Legitimacy of Multi-Stakeholder Initiatives. Business Ethics Quarterly, 22(3), 527–556. https://doi.org/10.5840/beq201222333
Quack, S. (2010). Law, Expertise and Legitimacy in Transnational Economic Governance: An Introduction. Socio-Economic Review, 8(1), 3–16.