Quick Thoughts: The Place for Private Governance in the AI Debate?

What follows is a quick post, with minimal editing or links. A rant, if you will, originally written for my LinkedIn, but it turns out I exceeded the character limit by a few thousand, so here it is:

There are a lot of conversations surrounding the #AIAct in the EU, especially as today is the day of the “last trialog” for it. One of the sticking points turns out to involve private governance, and since I spent way too many years of my life studying it, I’ll put in my two cents. But first, let’s back up and quickly explain what a trialog is and what the AI Act is. Then we’ll get back to the topic at hand (or, if you already know both of those things, skip the next two paragraphs).

For those who aren’t EU legislation nerds: once the EU Parliament has its position on a specific piece of policy, and the EU Council also has its position on that same piece of policy, they metaphorically (and physically) get in a room together, bring in the EU Commission as well (hence the “tri” in trialog), and hash things out. Once they agree, well, congratulations, you’ve got a new piece of legislation (depending on the type of legislation, it can be binding or not, allow variation or not, etc.)! Kind of. These agreements can end up being political, in that the sides say they agree *generally* on the boundaries so they get to say they successfully accomplished their task, but leave crucial choices (the horned one likes to dwell in the nuances, after all) up to technical meetings or, even further down the road, to “implementing acts”. That’s happened before, very recently, with the Digital Services Act, and it’s likely to happen again with the AI Act in a few hours.

The AI Act is a file (what the EU calls policy topics) that started a long time ago (if we consider the speed of advancement in AI): it was proposed by the EU Commission in April 2021. Since then the conversation around it has changed a lot, in some important parts and in some not so important ones. What’s changed even more is the technological context: written with a tiered, risk-based approach built on self-assessments, the AI Act ran headlong into the hype of generative AI, which swept the world in late 2022 – early 2023. Considered a different beast than other AI systems, gen AI (or foundation models, or general purpose AI, or whatever your preferred name) prompted the addition of separate rules to the Parliament’s version of the Act. This meant separate compliance obligations before a model is released, including risk identification, data governance, ensuring appropriate levels of performance, predictability, safety and cybersecurity, monitoring and mitigating environmental risk, cooperation with downstream providers, a quality management system, and technical documentation kept for 10 years (quite a mouthful, huh?), plus separate additional obligations for a subsection of gen AI, which included transparency protections, safeguards that the generated content does not violate EU law, and a summary of training data with regard to copyright.

So we’re all caught up, loosely, on the AI Act and trialogs. In between several trialogs, some of the EU’s member states (France, Germany and Italy, to be exact) decided that, while there was still no compromise between the different EU institutions, they might as well suggest a wild idea: how about, instead of adding obligations for gen AI, we remove them altogether and have a mandatory voluntary (yes, it sounds weird to me, too) industry-based code of conduct? And now we’re fully caught up with what’s going on in Brussels (and sometimes Strasbourg). The caveat to all of this is that the generative AI piece is certainly not the only one that may jeopardize the passage of the AI Act, especially when EU member states and the EU Parliament are still wrestling over things like real-time remote biometric identification (something states would like to be able to use in their surveillance capabilities, and which the Parliament’s default position has been to ban), and other similarly important sticking points. However, this new wrinkle from the three major Western European countries is important to discuss.

What the FR-DE-IT troika is suggesting is clearly a political action: France, at least, is banking on its homegrown start-up, Mistral, to be competitive on the world stage, and it believes that fewer regulatory obligations are better if Mistral is to catch up to the bigger players (who, incidentally, would benefit from this move as well). I’ve gone on the record several times to argue for the usefulness of private governance, co-regulatory mechanisms and other similar tools, as they can fill gaps left by pending legislation, looser regulation, or any other legitimate reason. The US context, for instance, is one where the government’s limitations connected to the First Amendment make such innovative tools very attractive and, if done well, meaningful. However, do not mistake this nuanced support for a full-throated endorsement in any context. While we can quibble over what exactly the obligations should be, over who exactly in the supply chain they apply to or below which size thresholds, or even what those thresholds are, suggesting removing those obligations wholesale and replacing them with industry-based self-governance is more than a step too far. So much so that even conservative members of the European Parliament have come out against this idea.

Innovation is important, and gen AI can really be at the forefront of a lot of things that would make life better. However, I have yet to be convinced that this new generation of companies (and some of the older ones) fighting in the AI space will somehow, magically, do a better job at self-regulation than almost every previous attempt at it. Pretending that companies will do things right on their own is a misunderstanding of the ethos of all for-profit companies and the fiduciary duty of their officers (unless it’s OpenAI’s intricate non-profit/for-profit governance structure, at which point, who knows). The White House voluntary agreements lacked anything resembling the critical building blocks of an actual governance structure, including timelines or enforcement mechanisms. The failed experiment of the Meta Oversight Board should be a clear example that industry alone will always build its guardrails to benefit itself.

To be meaningful, such self-regulatory structures would need an intense overhaul. To make sense substantively, they would have to include minimum standards for transparency, cybersecurity and information obligations; to gain structural legitimacy, they would require at least adding civil society stakeholders, elevating them to equals in the private governance structure (and in building said structure), and having strong enforcement mechanisms. At which point, how would this kind of structure be truly different from negotiating obligations in the AI Act?
