Algorithms, the core technology behind artificial intelligence, are influencing our modern world more and more. They analyse large amounts of data in a short time and discover new, surprising information and connections. We humans simply place our trust in the potential of algorithms: our minds cannot grasp the complexity of their information processing, and smart devices often make our lives more pleasant by relieving us of cognitive work. How is this disruptive technology changing our everyday life, our society and our democracy?
In the latest of her Duet interviews, Dr Caldarola, author of Big Data and Law, and Sophie Borchert discuss whether a shift from democracy towards an algocracy is taking place.
Not everyone is familiar with the word algocracy. “Cracy” stands for rule and “algo” comes from algorithm. By “algocracy” do you mean the rule of algorithms? Is algocracy, like democracy, a new form of government? Or merely a form of technology (perhaps even a disruptive technology) and/or a social stimulus?
Ms Borchert: The lack of familiarity with the term is hardly surprising, for it is quite new – a neologism, in fact. Originally it denoted a sociological distinction between market and bureaucracy, describing a system in which human behaviour is controlled not by economic mechanisms or the administrative machinery, but by algorithmic processes that link the two under the influence of globalisation.
I look at algocracy from the perspective of political science and, in particular, legal science, where it joins similar, in part odd, concepts like mediacracy (rule of the media) or gynecocracy (rule of women) – as well as democracy (rule of the people). Therefore, I interpret algocracy simply as the rule of algorithms, as you correctly recognised, focusing on the governmental aspect of that rule.
Although technology makes up a central part of that model, and rule always bears social characteristics because of the state’s roots in society, I wouldn’t reduce algocracy to either. It is more than the sum of its parts, namely a form of government or, more precisely, a form of rule. The difference lies in the fact that the former outlines only the formal construct of a state – the outer structure in which rule is embedded – whereas the latter also comprises its inner workings.
Thus, when talking about algocracy, I mean a system of power based on algorithmic, digitally implemented processes in a public dimension. That has nothing to do with some artificial super-intelligence from the world of science fiction that usurps state authority and takes power to rule according to its own “will” and oppress mankind. No, I refer to far more mundane and latent, but no less powerful, trends which might be able to infiltrate democracy.
In this respect, the aim of my study is to analyse the connecting factors and effects of those trends from a jurisprudential point of view, in order to answer the question of whether we are really about to drift from the democratic rule of the people towards an autocratic rule of algorithms.
My favourite quote:
“Any fool can know. The point is to understand!”
Albert Einstein
Did you end up with the provocative title of your dissertation “From democracy to algocracy?” because algorithms are “more intelligent” than the mass intelligence of people or more autocratic because they can process more facts, information and knowledge and are not judgmental?
Defining intelligence is a tricky issue. Though we struggle to define it for humans, we are quick to attribute it to algorithms – often too swiftly, and without knowing the technical details.
Algorithms per se are not intelligent; they needn’t have anything to do with computers. They are basically step-by-step instructions for converting input into output. All steps are clearly defined in the form of a conditional if-then scheme, so that each input produces exactly one output. Simple recipes and algebraic functions are examples of such deterministic rules. Furthermore, translated into digital code, an algorithm can be used as software for data processing, often following statistical or heuristic procedures in order to automate reasoning (inference).
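As a minimal illustration of such a deterministic if-then scheme – the postage rates here are invented for the example – each input is mapped to exactly one output:

```python
# A deterministic algorithm in the sense described above: a fixed
# if-then scheme that converts an input into exactly one output.
# The postage rates are hypothetical, chosen only for illustration.

def postage(weight_grams: int) -> float:
    """Step-by-step rule: every input weight yields exactly one price."""
    if weight_grams <= 20:
        return 0.85
    elif weight_grams <= 50:
        return 1.00
    elif weight_grams <= 500:
        return 1.60
    else:
        return 2.75

# Determinism: the same input always produces the same output.
assert postage(30) == postage(30) == 1.00
```

No matter how often or in what order the function is called, the mapping from input to output never changes – which is precisely what distinguishes such rules from the “learning” systems discussed next.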
Based on this, Artificial Intelligence (AI) refers to digital systems of algorithms that have the ability to “learn”. Technically speaking, AI is non-deterministic. With regard to a specified target, it is able to react to unpredictable changes in its environment or its input by autonomously adapting its internal code structure. This doesn’t happen on the basis of linear programs consisting of fixed commands, but through dynamic chains of actions: after one “if” come several possible “then” commands. This adaptation, called machine learning, takes place by training the algorithmic model on various data sets – valuable, not to say the most valuable, resources in the age of Big Data.
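A deliberately simplified sketch of what “training” means here – not an example from the interview, and with invented numbers – is a one-parameter model whose internal value is adapted from training data rather than fixed in advance:

```python
# A minimal sketch of machine learning as described above: the model's
# behaviour is not hard-coded but adapted from training data.
# Here a one-parameter linear model w*x is fitted by gradient descent.

def train(data, steps=200, lr=0.01):
    w = 0.0                          # internal parameter, adapted by training
    for _ in range(steps):
        for x, y in data:
            error = w * x - y        # compare prediction with training target
            w -= lr * error * x      # nudge the parameter to reduce the error
    return w

# Training set following the (hypothetical) rule y = 2x.
data = [(1, 2), (2, 4), (3, 6)]
w = train(data)
print(round(w, 2))  # the learned parameter approaches 2.0
```

The point of the sketch is that the rule “multiply by 2” was never programmed; it emerged from the data – which is also why different or skewed training data would yield a different rule.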
For this reason, AI is superior to us humans in terms of the scale and speed of information processing. But this technology is in a position neither to break through its programmed limits nor to develop creativity, emotions or even a conscious mind. As so-called Weak AI, it merely imitates human rational thinking and behaviour. We are far from developing Strong AI, let alone reaching a technological singularity.
Nevertheless, an enormous hazard is already inherent even in Weak AI. It is true that, in the absence of a will of its own, AI is incapable of forming its own opinion; in that sense it is neutral. But human judgement can surface in technical biases – in the design of the machine-learning model, in the training data, or during practical application after training. In connection with another weak spot, fatal consequences can occur: the continuous adaptation of the implemented rules, and with it the consistency of the output, cannot be retraced by the human user, especially not in the case of deep learning in Artificial Neural Networks. Consequently, biases remain undetected, and possibly false results are accepted without question. When these contribute to governmental decisions, such as a sentence in court or the rejection of an administrative application, the democratic character of those decisions must be called into question.
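How a bias in the training data resurfaces in the output can be shown with a deliberately simplistic sketch – the historical data are invented, and the “model” does nothing more than learn past approval rates per group:

```python
# Toy illustration (hypothetical data) of training-data bias: the "model"
# learns the historical approval rate per group and applies it as a rule,
# thereby reproducing whatever bias the historical decisions contained.

from collections import defaultdict

def learn_approval_rates(history):
    counts = defaultdict(lambda: [0, 0])      # group -> [approved, total]
    for group, approved in history:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    return {g: a / t for g, (a, t) in counts.items()}

# Historical decisions that systematically disadvantaged group "B".
history = [("A", True)] * 8 + [("A", False)] * 2 + \
          [("B", True)] * 3 + [("B", False)] * 7

rates = learn_approval_rates(history)
print(rates)  # {'A': 0.8, 'B': 0.3} -- the historical bias is reproduced
```

Nothing in the code mentions disadvantaging anyone; the skew lives entirely in the data – which is why such biases are so hard to detect from the outside, especially once the model is far more opaque than this two-line rule.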
Let’s go back to the word “rule” for a moment. Can only subjects rule, or can a technology do so as well? You said you defined algocracy as “a government-scale system of rule based on algorithmic, digitally implemented processes”. Have I understood you correctly in that algorithms are a technical tool of a ruling authority? Then who is the ruler – the one who develops and monopolises the algorithms (e.g. GAFAM), or the people who use them? Who or what legitimises rule? Furthermore, your quote from Albert Einstein suggests that it is crucial to “understand”. Do we understand algorithms? Can people consciously decide for or against digital processes in the age of fake news, deep fakes and agnostics, which Daniel Goeudevert criticised in his Duet Interview? Or are we powerless in the face of them because we cannot understand them – especially given their complexity and speed – and do we trust them more than our own knowledge and instincts? I see a risk of us losing the ability to examine things critically because of algorithms, with our sense of judgement, decision-making and problem-solving diminishing as a consequence. An individual may well come to doubt the “correctness” of algorithmic output, but without a lobby and funding, would they even be in a position to view the algorithmic system and its effects critically? This begs the question: will algorithms take on a life of their own and actually become the new ruling force – especially once they learn to supply themselves with the necessary electricity? And doesn’t that bring to mind AI weapons that can no longer be switched off?
Actually, you have already answered the first and the last question yourself. Algocracy is based on algorithmic processes, just as epistocracy is based on knowledge and plutocracy on money. Nevertheless, algorithms don’t elevate themselves to the position of ruler, but remain a mere vehicle. That might sound rather strange and theoretical, so let me explain the intellectual basis created by the sociologist Max Weber. He defined power as the “opportunity to impose one’s will in a social relationship even if there is some opposition, regardless of what the opportunity has been based on” (own translation). It makes up the essence of rule, which, for its part, Weber defines as an “opportunity for a certain type of order to be obeyed by an assigned group of people” (own translation). Weak AI, the only technically feasible form at the moment, lacks qualities necessary for rule, such as the desire for power. If the creation of a Super-AI with a mind of its own should one day be successful, the situation might be different. But then the world would be turned upside down anyway, and not only with regard to constitutional law.
The next question is harder to answer. Who is the actual ruler, if not algorithms themselves, especially when their lack of transparency forces us to react to their output without understanding it? My answer may sound unsatisfactory, but I don’t know (yet). When it comes to reining in algorithms, the chance of obedience depends on understanding and controlling them. Only someone who can articulate an order knows how to make use of the implemented medium, and only someone who can verify that the medium has carried out the order completely can expect obedience. In a democracy, this has to be the people as the sovereign, according to Article 20 (2) of the Grundgesetz (German Basic Law, GG). Yet given the opacity, complexity and autonomy of intelligent programs – used by new generations of media to personalise the flow of information, and by the government to increase the efficiency of its task fulfilment – that supposition will be a challenge. I’m particularly interested in whether people can still form a free will at all, and whether it can still be converted into state action by their representatives, given that algorithms can seemingly influence opinion-forming and decision-making. If not, the people no longer rule; instead, the group controlling the algorithms – their developers, operators or data managers – are in power as “algocrats”. Another option is that algocratic tendencies assume smaller proportions than feared and the people, in spite of democratic deficits, keep their sovereignty. I haven’t come to a final conclusion on that so far.
Who or what legitimates algocratic rule – that was the subject of your third question. It was Max Weber again who formulated important ideas on that issue by identifying power as a very volatile, unstable phenomenon. True rule therefore requires stabilising the balance of power through another attribute: legitimacy. Legitimacy means acceptance – in other words, the justification of governmental dominion, which arises from the conviction of those being ruled that it is legitimate. In this context Weber set up three idealised forms of legitimate rule. Approval of the exercise of power results from a belief either in conformity with a formal, normative set of rules and regulations (rational dominion), in the integrity of traditional, sacrosanct hierarchies (traditional dominion) or in the aura of the person in charge of power (charismatic dominion), and commands are accepted on that basis. With regard to algocracy, I have nearly run out of explanations once again. If the people retain sovereignty under algorithmic influence, then legitimacy is ensured according to Article 20 (2) GG. But if algocratic violations become apparent in the democratic system, there is a need to clarify whether this new form of dominion is still legitimate or whether it crosses the boundary into illegitimacy. Going by Weber’s three forms, one could consider rational rule, if algorithms were able to follow the legal system and its inherent values consistently – which, for technical reasons, they are in fact not capable of. Alternatively, the legitimacy of algocracy might derive from charismatic dominion – if one assumes that citizens trust the perfection of algorithmic information processing without question (automation bias). In fact, it is not that easy, because a purely sociological concept of legitimacy cannot simply be transferred to political and legal science. Actual approval isn’t enough.
A normative aspect is necessary if dominion is to be legally legitimate; public recognition must be provided by the law. That’s not the case with regard to blind trust in technology.
Your title suggests that there is a shift from democracy to algocracy. We often hear in the media that democracy is being “threatened”, and indeed research has long shown that democracy is in decline. Does algorithmisation – as a disruptive technology – endanger democracy, or are there other reasons for this decline? Is it mechanisms like echo chambers, which shape the opinion of a group into a “dominant” opinion and sift out “insignificant” ones?
The fact that there is constant movement between democracy and algocracy is not new. Both are opposing ideal types that don’t really exist in pure form. In every state, processes of democratisation and autocratisation are constantly taking place.
Currently many countries are being seized by a so-called third wave of autocratisation. As we know, democracy has been threatened before. What is new is the variety and simultaneity of the current threats. Democracy is in the throes of a crisis of unprecedented extent – a polycrisis, and one occurring with increasing frequency. At the turn of the millennium, more people lived in states with deteriorating rather than improving democratic conditions. By 2020, only 4 % of the world’s population lived in the latter situation.
The reasons for this are diverse and often reinforce each other. Democracy is endangered by globalisation and the accompanying decline of states and parliaments. Boundaries become blurred, values begin to sway, and the bonding forces that have kept society together in its previous shape fade. On top of that, we cannot neglect the climate crisis: democratic states have thus far proved incapable of acting and seem helpless, and their poor attempts at managing the situation have widened the gap in an already fragmented society. Algorithmisation, the spread of algorithmic applications, adds more fuel to that explosive mixture.
In this context, I distinguish between two areas: algorithmisation affects both the establishment and the exertion of democratic rule – it is the establishment you alluded to in your question. Democracy is the dominion of the people. The people are the sovereign, from whom all power in a state must come and to whom it must be attributable, according to Article 20 (2) GG. That happens through a process of will formation, in the course of which a collective will develops out of individual opinions and eventually becomes part of the will of the state. When algorithms are implemented in that process and constrain the setting in which personal opinions are formed on the internet (where this increasingly happens), the emerging democratic system is tarnished from the very start. Majority and minority opinions are nipped in the bud. The diverse range of opinions is simultaneously homogenised and polarised.
Democracy, however, lives from the very plurality of a constructive political discourse. Of course, we cannot simply blame any isolation in imaginary spaces on filter algorithms and web tracking by cookies. Humans naturally tend to focus their attention on information that matches their own views and to ignore information that challenges them (selective exposure). Opinion-forming content is tailored to personal preferences in both cases – but through natural, self-selected personalisation in the echo chambers you mentioned, and through algorithmic pre-selection in filter bubbles. Traditional media like newspapers and television have to filter their content, too. However, personalised filtering on the internet exceeds human-driven selection, for example in the case of amplifier algorithms that deliberately present eye-catching news to prolong the time spent on a website.
Algorithms are complex and, depending on their characteristics, lack transparency and thus traceability, and they take on – increasingly autonomously – an ever-growing number of human tasks. In the event of their “takeover of power”, would they depend on legitimation by the people – meaning would they have to be elected, given that they can make decisions and quickly derive a basis for decisions from large amounts of data? Or are algorithms tools, and will they remain technical tools that only require legal regulation, such as the EU’s AI Regulation, which specifies what they are permitted to do and under what “conditions”? If so, can the people develop guidelines for dealing with these “intelligent”, possibly even “more intelligent”, algorithms?
As I already mentioned, when algorithms are employed by the state’s executive, legislative or judicial authority, they affect the exertion of democratic rule. The crucial point concerns the democratic legitimation of sovereign decisions – the justification of the government acting in specific situations to exert its rule – which is an indispensable condition for a democracy.
State authority, which stems from the people, must be accountable to them according to Article 20 (2) GG – please pardon my repetition, but the principle of the people’s sovereignty can’t be emphasised often enough. The prevailing view differentiates between two types of legitimation: 1) personal legitimation refers to the justification of the person deciding (Should the decision be made by this person?), while 2) factual legitimation considers the justification of the decision on its merits (Should this decision be made?). Observing a certain overall level of legitimation is sufficient, so that deficits in one element can be balanced by the other. A complete substitution, however, is impossible.
A first question already arises when we consider whether algorithmic output qualifies as a decision at all. Even the most intelligent, entirely autonomous program is to be classified as Weak AI and is therefore bound by its coded requirements. At any rate, it should be clear to everyone that an algorithm cannot itself be elected, because being elected presupposes an eligible subject.
Nevertheless, the output of an algorithm can have a controlling effect that moulds its decisional character and makes it a powerful instrument. It is to be noted that fully automated decisions are subject to strict limitations. But semi-automated decisions can also have a considerable impact on human behaviour because of their supposed unimpeachable status (automation bias).
In this case, the accountability of the output exerts a profound influence on democratic legitimation, so that an algorithmic decision is still considered part of the people’s will. If this becomes impossible because of the lack of transparency of a “black box algorithm”, deficits in legitimation arise. They can affect personal legitimation, when it is unclear how a decision is attributed to the official or institution in charge, or factual legitimation, when the algorithmic decision-making follows not the law but its own, hard-to-retrace internal rules.
Whether and how far such legitimation deficits can be compensated for is another difficult question. Legal control, such as the AI Regulation of the European Union, is in any case vital for such a (high-)risk technology. However, as long as AI is not banned completely, the effectiveness of regulation depends on the technological capability to x-ray the black box. If science fails to achieve this, all requirements on verification, documentation and software quality assurance will remain paper tigers.
Many companies try to reach end consumers directly to increase sales. Algorithms also enable the state to get into “direct” contact with citizens and to take their wishes into account – the best prerequisite for a democracy. Are algorithmically generated filter bubbles therefore necessary for consensus building, and, if so, why?
You are thinking about algorithmically driven support of direct democratic elements, I suppose. To be honest, I doubt that a direct democracy really is the “best“ form of democracy. It surely has its benefits as seen in Switzerland or the ancient Attic democracy which are often upheld as shining examples of this form of democracy. But the good performance of those systems is caused by historic, territorial or structural factors which aren’t found everywhere, and certainly not in Germany.
On the other hand, direct democratic elements can help to implement the principle of the people’s sovereignty more fully within a representative democracy.
A deliberative democracy, which focuses on public deliberation in order to involve citizens in decision-making, is a good example for our discussion. Ideally, the more convincing argument in terms of content – and not (only) the formal majority – should determine a decision, because by considering all arguments an agreement on the “best” option can be reached, at least in theory. Taking this one step further, a deliberative process could gain a lot from algorithmic applications, for example online fora or bots that evaluate and process the range of opinions.
But I wouldn’t claim that filter bubbles are necessary for reaching a consensus. First of all, there is a difference between private companies that follow economic calculations and pre-select the information presented, and non-commercial institutions which compile all the information for deliberative purposes. Secondly, any consensus depends on a given quorum. Where unanimity is required, a highly deliberative procedure might be what enables, rather than blocks, a democratic process.
When a simple or qualified majority is sufficient to make a decision, such as in the German legal system, it would be inappropriate to try and conceive of a decision process where all participants must share the result. An established representative democracy accepts that there will always be a minority with a different view and provides rights and privileges for its protection. Thus, a massive extension of direct democratic elements driven by algorithms is not really necessary at this time.
We are currently confronted with many crises; you yourself have used the keyword “polycrisis”. Let’s look at the environmental crisis. Some claim that capitalism will shrink at this time due to a lack of fossil fuels so that we cannot deal with the environmental crisis without chaos ensuing. They even go so far as to assert that a democratically planned economy is required for the distribution and rationing of scarce resources (e.g. drinking water, energy). Are algorithms suitable for a “democratically” planned economy because they can oversee and analyse the complex situations more quickly and perhaps make “right/fair” decisions regarding distribution and rationing?
The economy has to be redesigned in order to successfully deal with climate change. Whether that means rejecting capitalism, I cannot say as I am no economist.
One thing is certain – if we want to stick to capitalism, we must reform it, at least in the places where it hinders climate protection and coping with its consequences. Especially in this context, (more or less) intelligent, fast, (seemingly) neutral and faultless algorithms seem to be an obvious choice. In moderate forms they are already being used for ecological purposes, for example in “Smart Farming” in agriculture, where field robots are taught via machine learning where to weed, fertilise or sow. On a larger scale, parcels of land and machines are interconnected, and external data sources about weather and topology are integrated to optimise sowing and harvesting strategies. With “Smart Metering” in the energy industry, intelligent measurement systems help to coordinate the generation and consumption of energy and thereby improve energy management.
A far more radical step would be a transformation of the free market economy into a planned economy, where the scarcity in resources would force state intervention. This involvement could start with local bottlenecks, and then extend to regional, national or even global levels.
If this path were chosen by politics, I would support using algorithms to solve distribution issues. These depend on more variables than one person or even whole task forces could ever consider, let alone evaluate correctly. Big Data, gigantic data stocks lying more or less organised in data warehouses, data lakes or data swamps, and their processing methods might provide the necessary due diligence.
Others rely on human innovative power to overcome crises. Will algorithms find faster and better solutions than humans and are they therefore the solution to being climate-neutral by 2045 in accordance with legal requirements, despite a possibly long market launch due to testing, studies etc.?
I think we have to abandon the idea that we can cope with the many crises we have caused by using only our natural cognitive resources. By using algorithms to connect, communicate and transact globally – moving huge amounts of data in fractions of a second while doing so; in 2020 the estimated global data volume came to 50 zettabytes – we have opened the proverbial Pandora’s box and relinquished our hold on technology.
Digital dimensions know neither space nor time, at least not as we perceive them, and have developed processes in which we can no longer intervene, even though we persuade ourselves otherwise. We live in a VUCA world, whose characteristics – volatility, uncertainty, complexity and ambiguity – are reflected in every major crisis of our time. Instrumentalising algorithms to navigate this world is an arduous task, but I don’t see any alternative. Even if we don’t reach climate neutrality by 2045 using algorithms, because reforms and innovation in politics and science are being blocked, without them we are surely never going to achieve that goal.
Mr Hans-Jürgen Jakobs said that the new “ism” was monopoly. Prof Dr Schupp advocates an unconditional basic income, while others advocate the shrinking of capitalism, and you are investigating whether algocracy will soon gain strength. What will our society look like in the future? Is an algocracy the desired system change that will guarantee success in overcoming these various crises?
I am afraid neither Mr Jakobs, Prof Dr Schupp nor I can offer a fool-proof recipe for success here. There is not just one switch that changes everything for the better. The thought is appealing, but utopian, unfortunately.
Just as the polycrisis consists of several single crises – climate crisis, democracy crisis (because of algocratic tendencies among other things), economic crisis, security crisis – coping with the polycrisis has to pursue a multimodal strategy. Isolated attempts to rescue the climate, to reconcile all nations for world peace or to restore the integrity of democracy, are the first steps in the right direction. But without going hand in hand, they won’t achieve a sustainable effect.
Ms Borchert, thank you for sharing your insights on the different aspects regarding algocracy.
Thank you, Dr Caldarola, and I look forward to reading your upcoming interviews with recognised experts, delving even deeper into this fascinating topic.