On regulation of algorithms
And I don’t mean in the obvious way that the adjective “artificial” leads to philosophical thought and debate about the essence of intelligence and therefore the essence of human nature. No news there. Just ask Daniel Dennett, or any philosopher of mind.
I mean that AI made us think again about the ethics and politics of computerized systems.
Recently, I have noticed several voices calling to “regulate algorithms”.
Taken at face value, this phrase makes no sense. An algorithm is a recipe: a sequence of steps to solve a class of problems in a deterministic way. Surely, there is no need to regulate any algorithm in this sense.
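To make the classical sense concrete, here is a textbook example: Euclid’s algorithm for the greatest common divisor. Every step is fixed in advance by the programmer, and the same input always yields the same output. There is nothing here that calls for regulation.

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: a fixed, deterministic sequence of steps.

    The recipe is written down completely by a human; the program
    does exactly what its instructions say, nothing more.
    """
    while b:
        a, b = b, a % b  # replace the pair by (b, remainder)
    return a

print(gcd(48, 18))  # prints 6
```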
What the call for regulation actually intends to target is not what is classically known as an algorithm, but what only recently has started to be called an algorithm, namely: software systems that employ non-algorithmic components to make autonomous decisions.
To understand what is going on, you need to know that the word “algorithm” has shifted meaning in two steps:
- Auto-antonym: The word “algorithm” has come to mean its own opposite. Originally, computer scientists used it to mean: a set of instructions to deterministically solve a problem. In the context of AI, the popular meaning of “algorithm” has become: a program trained by example to inscrutably solve a problem.
- Pars pro toto: The word algorithm is no longer reserved for the AI component that does its task non-algorithmically, but is now also used to refer to the entire system of which it is a part.
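The contrast between the two senses can be sketched in a few lines. Below is a minimal, hypothetical example of a “program trained by example”: instead of a programmer writing the decision rule, a threshold is derived from labeled data by minimizing classification errors. The data and the training routine here are invented for illustration, not taken from any real system.

```python
def train_threshold(examples):
    """Learn a one-parameter decision rule from labeled examples.

    No human writes the rule; it is extracted from data. This is
    'algorithm' in the newer, popular sense: trained by example.
    """
    xs = sorted(x for x, _ in examples)
    # candidate thresholds: midpoints between adjacent feature values
    candidates = [(a + b) / 2 for a, b in zip(xs, xs[1:])]

    def errors(t):
        # count examples the rule "x > t" gets wrong
        return sum((x > t) != label for x, label in examples)

    return min(candidates, key=errors)

# hypothetical training data: (feature value, decision)
data = [(1.0, False), (2.0, False), (3.0, True), (4.0, True)]
threshold = train_threshold(data)
print(threshold)  # prints 2.5 -- learned from data, not hand-coded
```

The learned rule works on this toy data, but unlike Euclid’s recipe, its behavior is an artifact of whatever examples it was fed, which is exactly where the concerns about data and decisions enter.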
Like a Möbius strip or an alligator biting its own tail upside down, the meaning of the word “algorithm” has shifted from “deterministic recipe” to “computerized system with non-algorithmic, data-driven decision-making components”.
And, actually, it is the ingredients of “data” and “decision” in this newly reborn notion of “algorithm” that explains why there is reason for ethical concerns and political debate, and hence for the call for regulation.
Algorithms (in the new sense) make use of data — our data, the data of citizen-consumers — to make decisions that affect us. Affect our assets, movements, jobs, liberties, rights. In short: that affect our lives.
Whenever personal data is handled, privacy is a concern. And privacy is intimately connected to the autonomy and agency of the individual.
Whenever life-affecting decisions are made, accountability is a concern: our ability to scrutinize and call into question the reasons for a decision, and therefore its legitimacy.
So, while computerized systems have been around and influential in our lives for half a century at least, their increased use of our data and increased power to make decisions indeed justify thinking again about their ethics and politics.
My recommendation to those engaged in that debate, whether they are politicians, policy-makers, citizens, or (computer) scientists, is to reflect for a moment on the words they employ. Clarity of thought requires clarity of language. Make sure you are clear at least on what “algorithm” means.
NB: Interestingly, the Algorithmic Accountability Act of the US Congress uses the word “algorithm” only twice. Once in its title, and once to explain that this is merely the short title of the actual act, which then goes on to state its true subject of regulation to be “automated decision systems”. A regulation with clear content and a confusing title.
> To direct the Federal Trade Commission to require entities that use, store, or share personal information to conduct…