Facebook’s Oversight Board Can’t Intervene, So Stop Asking

This post originally appeared on Techdirt with the title Facebook’s Oversight Board Can’t Intervene, So Stop Asking:

As Facebook employees stage a digital walkout and make their thoughts known about the social media giant’s choice not to intervene in any way on “political posts”, especially those of President Donald Trump, some have called for the newly created Oversight Board to step up and force a change at Facebook. While the official answer is that the Board can’t start yet (because, supposedly, laptops haven’t been handed out), the real and very simple reason the Facebook Oversight Board won’t get involved is that it can’t. It was not created to function that way, it is not staffed for something like this, and ultimately, given its relationship with Facebook, anything it said on this matter right now would be taken in an advisory capacity at best. Facebook, understandably not wanting to actually give any of its power away, played confidence games with the idea of external, independent oversight, and it’s clear that it fooled a lot of people. Let me explain.

In three-card monte, the huckster keeps shuffling three playing cards until the victim is likely to guess wrong about where the “money card” is hiding, then flops the cards one by one. For Facebook’s prestidigitation on content moderation, last month’s announcement of the initial 20 highly regarded experts tapped as members of its independent oversight board is the second card flop, and, predictably, the money card is not there.

The ongoing sleight of hand performed by Facebook is subtle but fundamental. The Board was set up as truly independent in every way, from member and case selection to the Board’s internal governance. In terms of scope and structure, it is guided by previously released bylaws to primarily handle a small set of content removal cases (which reach the Board after the regular appeals process is exhausted) and to direct Facebook to change its decisions in those cases. To a much lesser extent, the Board can also provide input or recommendations about Facebook’s content moderation policies, although time and resources are not allocated for this. Facebook is not obligated in any way to follow those policy recommendations; it must simply respond within 30 days and describe any action it may take.

In the pages of the San Francisco Chronicle’s Open Forum, and elsewhere, I and others have called attention to this empty action as far back as September 2019, at the first card flop: the public release of the Board’s charter and bylaws. The project continued unabated and unchanged as friendly experts extolled the hard work of the team and preached optimism. Glaring concerns over the Board’s advisory-at-best, non-binding overall power were not only left unaddressed but actually dismissed, with cautions that board member selection, last month’s flop, would be where the money card is. Can you spot the inconsistency? It doesn’t matter if you have the smartest independent advisors if you’re not giving them the opportunity to actually impact what you do. Of course, the money card wasn’t there.

In early May, the Menlo Park-based company released the list of its Oversight Board membership, with impressive names (former heads of state, Nobel Prize laureates and subject matter experts from around the world). Because the Board is truly independent, Facebook’s role was minimal: beyond coming up with the structure and bylaws in consultation with experts from around the world (full disclosure: the author was involved in one round of consultations in mid-2019), it only directly chose the four co-chairs, who were then heavily involved in choosing the other 16 members. A lot of chatter around this announcement focused, predictably, on who the people are, whether the board is diverse, whether it is experienced enough, and so on, while some even focused on how independent the board truly is. As the current crisis is showing, none of that matters.

As we witness the Board’s institutionalized, structural and political inability to perform oversight, it is becoming entirely clear that Facebook is not at all committed to fixing its content moderation problems in any meaningful way, and that political favor is more important than consistently applied policies. There is no best-case scenario anymore, as the Board can only fail or infect the rest of the industry. And what is a lose-lose for all of us will likely still be a win-win for Facebook.

The bad case scenario is the likeliest: the Board is destined to fail. While Zuckerberg’s original ideas of transparency and openness were great on paper, the Board quickly turned into little more than a potential shield against loud government voices (such as Big Tech antagonist Sen. Hawley). Not only is that not working (Sen. Hawley responded to the membership list with even harsher rhetoric), but the importance placed on optics over the reality of solving this problem is even more obvious now. Giving the Board few, if any, real leverage mechanisms over the company can at most build a shiny Potemkin village, not an oversight body. If we dispense with all the readily available evidence to the contrary and give Facebook the benefit of the doubt that it tried, the alternative explanations for this rickety and impotent construction are not much better. It may be that giving a final say over difficult cases, the Board’s main job, is not something Facebook was comfortable doing itself anyway (and who can blame it, given the pushback the platform gets with any high-profile decision). Or it may be a bizarre allegiance to the flawed constitutional-law perspective that Facebook can build itself a Supreme Court, which makes the Board act as an appellate court of sorts, with a vague potential for creating precedent rather than truly providing oversight.

If the Board’s failure doesn’t tarnish the prospect of a legitimate private governance model for content moderation, there is a lot to learn about how to avoid unforced errors. First, we can safely say that while corporations may be people, they are definitely not states. Creating a pseudo-judiciary without any of the accouterments of a liberal-democratic state, such as a hard-to-change constitution, co-equal branches and some sort of social contract, is a recipe for disaster. Second is a fact that theory, literature and practice have long argued: structure fundamentally dictates how this type of private governance institution will run. And with an impotent Board left to mostly bloviate after the fact, without any real means to change the policies themselves, this structure clearly points to a powerless but potentially loud “oversight” mechanism, pushed to the front as a PR stunt but unequipped to deal with the real problems of the platform. Finally, we see that even under intense pressure from numerous and transpartisan groups, and with a potential openness to fixing a wicked problem, platforms are very unwilling to give up, even partly, their role and control in moderating content, though they will gladly externalize their worst headaches. If their worst headaches were aligned with the concerns of their users, that would be great, but creating “case law” for content moderation is an exercise in futility, as the company struggles to reverse-engineer Trump-friendly positions with its long-standing processes. We don’t have lower-court judges who get to dutifully decide whether something is inscribed in the Board’s previous actions. We have either overworked, underpaid and scarred people making snap decisions every minute, or irony- and nuance-illiterate algorithms poised to interpret these decisions mechanically.
And more to the point, we have executives deciding to provide political cover to powerful players rather than enforce their own policies, knowing full well they’re not beholden to any oversight: even if the Board were already up and running, by the time it ruled on this particular case, if ever, the situation would no longer be of national importance.

As always, there is still a solution. The Oversight Board may be beyond salvaging, but the idea of a private governance institution, where members of the public, civil society, industry and even government officials can come together and try to reach common ground on what the issues are and what the solutions might be, should still flourish, and should not be thrown away simply because Facebook’s initial attempt was highly flawed. Through continued vigilance and genuine, honest critiques of its structure and real role in the Facebook ecosystem, the Oversight Board can, at best, register as just one experiment of many, not a defining one, and we can soldier on with more diverse, inclusive, transparent and flexible industry-wide dialogues and initiatives.

The worst-case scenario is if the Board magically coasts through without any strong challenge to its shaky legitimacy or its impotent role. The potential for this to happen is there, since there are more important things in the world to worry about than whether Facebook’s independent advisory body has any teeth. In that case, Facebook intends to, one way or another, franchise it to the rest of the industry. And that would be the third and final flop. However, as I hope you have figured out by now, the money card wouldn’t be there either. The money card, the card that Facebook never actually intended to give away or even show us, the power over content moderation policies, was never embedded in the structure of the Board, its membership or any potential industry copycats that could legitimize it. This unexpected event allowed us to take a peek at the cards: the money card is still where it was all along, in Facebook’s back pocket.
