Are algorithms taking over?

Sophie Borchert

Algorithms, special forms of artificial intelligence, are influencing our modern world more and more. They analyse a lot of data in a short time and discover new, surprising information and connections. We humans simply place our trust in the potential of algorithms, because our minds are unable to grasp the complexity of their information processing, and because smart devices often make our lives more pleasant by relieving us of cognitive work. How is this disruptive technology changing our everyday life, our society and our democracy?

In the latest of her Duet interviews, Dr Caldarola, author of Big Data and Law, and Sophie Borchert discuss the possibility of a shift from democracy towards an algocracy taking place.

Not everyone is familiar with the word algocracy. "Cracy" stands for rule and "algo" comes from algorithm. By "algocracy" do you mean the rule of algorithms? Is algocracy, like democracy, a new form of government? Or merely a form of technology (perhaps even a disruptive technology) and/or a social stimulus?

Ms Borchert: The lack of familiarity with the term is hardly surprising, for it is quite new, a neologism in fact. Originally it illustrated the sociological distinction between market and bureaucracy, and in that context it describes a system in which human behaviour is controlled not by economic mechanisms or the administrative machinery, but by algorithmic processes that link both with each other under the influence of globalisation.

I look at algocracy from the perspective of political science, in particular legal science, where it joins similar, in part odd, concepts like mediacracy (rule of the media) or gynecocracy (rule of women) – as well as democracy (rule of the people). I therefore interpret algocracy in simple terms as the rule of algorithms, as you correctly recognised, focusing on the governmental aspect of this rule.

Although technology makes up a central part of that model, and rule always takes on social characteristics because of the state's roots in society, I wouldn't reduce algocracy to either. It is more than the sum of its parts, namely a form of government or, more precisely, a form of rule. The difference lies in the fact that the former outlines only the formal construct of a state, the outer structure in which rule is embedded, whereas the latter also comprises the inner workings.

Thus, when talking about algocracy, I mean a system of power based on algorithmic, digitally implemented processes in a public dimension. That has nothing to do with artificial forms of some sort of super-intelligence from the world of science fiction that usurp state authority and seize power to rule according to their own "will" and oppress mankind. No, I refer to far more mundane and latent, but no less powerful, trends which might be able to infiltrate democracy.

In this respect, the aim of my study is to analyse the connecting factors and effects of those trends from a jurisprudential point of view, in order to answer the question of whether we are really about to drift from the democratic rule of the people towards an autocratic rule of algorithms.

My favourite quote:

“Any fool can know. The point is to understand!”

Albert Einstein

Did you end up with the provocative title of your dissertation "From democracy to algocracy?" because algorithms are "more intelligent" than the collective intelligence of people, or more autocratic because they can process more facts, information and knowledge and are not judgmental?

Defining intelligence is a tricky issue. Though we can hardly define it for humans, it is quickly attributed to algorithms – often too swiftly, without knowing the technical details.

Algorithms per se are not intelligent; they needn't have anything to do with computers. They are basically step-by-step instructions for the conversion of input into output. All steps are clearly defined in the form of a conditional if-then scheme, so that each input produces exactly one output. Simple recipes and algebraic functions are examples of such deterministic rules. Furthermore, translated into digital code, an algorithm can be used as software for data processing, often following statistical or heuristic procedures in order to automate reasoning (inference).
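The deterministic, if-then character described here can be made concrete in a few lines of code; Euclid's algorithm for the greatest common divisor, a classic pre-computer example, is sketched below:

```python
def gcd(a: int, b: int) -> int:
    # A fixed sequence of if-then steps: while the remainder is not zero,
    # replace the pair (a, b) with (b, a mod b).
    # Every input pair yields exactly one output - the rule is deterministic.
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(48, 18))  # always 6 for this input, no matter how often it runs
```

The same recipe works equally well on paper, which illustrates the point that an algorithm is a rule first and software only second.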

Based on this, Artificial Intelligence (AI) refers to digital systems of algorithms having the ability to "learn". Technically speaking, AI is non-deterministic. With regard to a specified target, it is able to react to unpredictable changes in its environment or its input by autonomously adapting its internal code structure. This doesn't happen on the basis of linear programs consisting of fixed commands, but of dynamic chains of actions; after one "if" come several "then" commands. The adaptation, called machine learning, takes place by training the algorithmic model on various data sets – valuable, not to say the most valuable, resources in the age of Big Data.
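As a toy illustration of this adaptation (my own sketch, not taken from the study), here is a minimal linear model whose internal parameters adjust themselves to training data:

```python
# Minimal sketch of machine learning: a linear model y = w*x + b whose
# internal parameters adapt, step by step, to a set of training samples.
def train(samples, epochs=500, lr=0.05):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in samples:
            error = y - (w * x + b)
            w += lr * error * x  # nudge the weight toward the target
            b += lr * error      # nudge the bias toward the target
    return w, b

# The samples follow y = 2x + 1; training recovers roughly those parameters
# without them ever being written into the program.
w, b = train([(0, 1), (1, 3), (2, 5), (3, 7)])
```

The behaviour of the trained model comes from the data, not from fixed commands, which is precisely why its output can be hard to retrace afterwards.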

For this reason, AI is superior to us humans in terms of the scale and speed of information processing. But this technology is in a position neither to break through its programmed limits nor to develop creativity, emotions or even a conscious mind. As so-called Weak AI, it just imitates human rational thinking and behaviour. We are far from the development of Strong AI and a technological singularity.

Nevertheless, an enormous hazard is already inherent even in Weak AI. It is true that, in the absence of its own will, AI is incapable of forming its own opinion. In that sense, it is neutral. But human judgement can become apparent in technical biases, whether in the design of the machine learning model, in the training data or during practical application after training. In connection with another weak spot, fatal consequences can occur: the continuous adaptation of the implemented rules, and with it the consistency of the output, cannot be retraced by the human user, especially not in the case of deep learning in Artificial Neural Networks. Consequently, biases remain undetected and possibly false results are accepted without question. When they contribute to governmental decisions, like a sentence in court or the rejection of an administrative application, the democratic character of these decisions must be called into question.

Let's go back to the word "rule" for a moment. Can only subjects rule, or what about a technology? You said you defined algocracy as "a government-scale system of rule based on algorithmic, digitally implemented processes". Have I understood you correctly in that algorithms are a technical tool of a ruling authority? Then who is the ruler – the one who develops and monopolises the algorithms (e.g. GAFAM), or the people who use them? Who or what legitimises rule? Furthermore, your quote from Albert Einstein suggests that it is crucial to "understand". Do we understand algorithms? Can people consciously decide for or against digital processes in the age of fake news, deep fakes and agnostics, which Daniel Goeudevert criticised in his Duet Interview? Or are we powerless in the face of them because we cannot understand them – especially given their complexity and speed – and do we trust them more than our own knowledge and instincts? I see a risk of us losing the ability to examine something critically because of algorithms, with our judgment, decision-making and problem-solving diminishing as a consequence. Even if an individual develops doubts about the "correctness" of algorithmic output, without a lobby and funding, would they be in a position to view the algorithmic system and its effects critically? This begs the question: will algorithms take on a life of their own and actually become the new ruling force – especially once they learn to supply themselves with the necessary electricity – and doesn't that make us think of AI weapons that can no longer be switched off?

Actually, you have already answered the first and the last question yourself. Algocracy is based on algorithmic processes, similar to epistocracy being based on knowledge and plutocracy on money. Nevertheless, algorithms don't elevate themselves to being rulers, but remain a mere vehicle. That might sound rather strange and theoretical. In this regard, let me explain the intellectual basis created by the sociologist Max Weber. He defined power as the "opportunity to impose one's will in a social relationship even if there is some opposition, regardless of what the opportunity is based on" (own translation). It makes up the essence of rule, which, for its part, Weber defines as the "opportunity for a certain type of order to be obeyed by an assigned group of people" (own translation). Weak AI, the only technically feasible form at the moment, lacks important qualities, such as the desire for power, that would be necessary for it to qualify as a ruler. If the creation of a Super-AI with a mind of its own should one day succeed, the situation might be a different one. But then the world would be turned upside down anyway, not only with regard to constitutional law.

The next question is harder to answer. Who is the actual ruler, if not algorithms themselves, especially when their lack of transparency forces us to react to output we cannot comprehend? My answer may sound unsatisfactory, but I don't know (yet). When it comes to reining in algorithms, the chance of obedience depends on understanding and controlling them. Only the person who can articulate an order knows how to make use of the implemented medium. And only someone who can ensure that the medium has carried out the order completely can expect obedience that the recipient is able to retrace. In a democracy, according to Article 20 (2) of the Grundgesetz (German Basic Law, GG), this has to be the people as the sovereign. Yet given the opacity, complexity and autonomy of intelligent programs – used by new generations of media to personalise the flow of information, and by the government to make the fulfilment of its tasks more efficient – that supposition will be a challenge. I'm particularly interested in whether people are able to exercise free will at all, and whether that will can be carried out by state representatives, given that algorithms can seemingly influence opinion and decision making. If not, the people are no longer ruling; rather, the group controlling the algorithms – their developers, operators or data managers – are the ones in power, as "algocrats". Another option is that the algocratic tendencies assume smaller proportions than feared and the people, in spite of democratic deficits, keep their sovereignty. I haven't come to a final conclusion on that so far.

Who or what legitimates algocratic rule – that was the subject of your third question. It was Max Weber again who formulated important ideas concerning that issue by identifying power as a very volatile, unstable phenomenon. True rule therefore requires stabilising the balance of power, represented by another attribute: legitimacy. Legitimacy means acceptance, in other words the justification of governmental dominion, which arises from the conviction of those being ruled that it is legitimate. In this context Weber set up three idealised forms of legitimate rule. Approval of the exercise of power results from a belief either in conformity with a formal, normative set of rules and regulations (rational dominion), in the integrity of traditional, sacrosanct hierarchies (traditional dominion) or in the aura of the person in charge of power (charismatic dominion), and commands are accepted on that basis. With regard to algocracy, I have nearly run out of explanations once again. If the people retain sovereignty under algorithmic influence, then legitimacy is ensured according to Article 20 (2) GG. But if algocratic violations become apparent in the democratic system, there is a need to clarify whether this new form of dominion is still legitimate or whether it crosses the boundary into illegitimacy. Going by Weber's triad, one could consider rational rule, if algorithms were able to follow the legal system and its inherent values consistently – which they are in fact not capable of, for lack of the necessary technical logic. Alternatively, the legitimacy of algocracy might come from charismatic dominion – if you assume that citizens trust the perfection of algorithmic information processing without question (automation bias).
In fact, it is not that easy, because a purely sociological concept of legitimacy cannot simply be transferred to political and legal science. Actual approval isn't enough. A normative aspect is necessary if dominion is to be legally legitimate; public recognition must be provided by the law. That's not the case with blind trust in technology.

Your title suggests that there is a shift from democracy to algocracy. We often hear in the media that democracy is being "threatened", and indeed research has long shown that democracy is in decline. Does algorithmisation – as a disruptive technology – endanger democracy, or are there other reasons for this decline? Is it mechanisms like echo chambers, which shape the opinion of a group into a "dominant" opinion and sift out "insignificant" opinions?

The fact that there is a constant transformation between democracy and autocracy is not new. Both are opposite ideal types that don't really exist in that pure form. In every state, processes of democratisation and autocratisation are constantly taking place.

Currently many countries are being seized by a so-called third wave of autocratisation. As we know, democracy has been threatened in the past as well. What's new is the variance and simultaneity of the threats. Democracy is in the throes of a crisis of unprecedented extent, a so-called polycrisis, whose individual crises are occurring with increasing frequency. At the turn of the millennium, more people lived in states with deteriorating rather than improving democratic conditions. In 2020 only 4 % of the world population lived in the latter situation.

The reasons for this are diverse and often reinforce each other. Democracy is endangered by globalisation and the accompanying decline of states and parliaments. Boundaries become blurred, values begin to sway and the bonding forces that have kept society together in its previous shape fade. On top of that, we cannot neglect the climate crisis: democratic states have thus far proven incapable of acting and seem helpless in their poor attempts at managing the situation, which has widened the gap in an already fragmented society. Algorithmisation, the spread of algorithmic applications, adds more fuel to that explosive mixture.

In this context, I distinguish between two areas: algorithmisation affects both the establishment and the exertion of democratic rule – the establishment being what you alluded to in your question. Democracy is the dominion of the people. The people are the sovereign, from whom all power in a state must come and to whom it must be attributable, according to Article 20 (2) GG. That happens through a process of opinion formation, in the course of which a collective will develops out of individual opinions and eventually becomes part of the will of the state. When algorithms are implemented in that process and constrain the setting in which personal opinions are formed on the internet (where this increasingly happens), the emerging democratic system is tarnished from the very start. Majority and minority opinions are nipped in the bud. The diverse range of opinions is both homogenised and polarised.

Democracy, however, lives from the very plurality of a constructive political discourse. Of course, we cannot simply blame any isolation in imaginary spaces on filter algorithms and web tracking by cookies. Humans naturally tend to focus their attention on information that matches their own view, and to ignore information that challenges it (selective exposure). Opinion-forming content is designed to suit personal preferences as well, but through natural, self-selected personalisation in the echo chambers you mentioned, not through algorithmically pre-selected, personalised settings in filter bubbles. Traditional media like newspapers and television have to filter their content, too. However, personalised filtering on the internet exceeds human-driven selection, for example in the case of amplifier algorithms that intentionally present eye-catching news to prolong the time spent on a website.
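To make the pre-selection mechanism concrete, a filter algorithm of this kind can be caricatured in a few lines (the function and field names below are my own, purely illustrative):

```python
# Illustrative sketch of algorithmic pre-selection: rank feed items by
# overlap with a user's interest profile, so matching content is amplified
# and non-matching content sinks out of sight.
def rank_feed(items, interests):
    def score(item):
        # Number of tags shared with the user's interests.
        return len(set(item["tags"]) & interests)
    return sorted(items, key=score, reverse=True)

feed = [
    {"title": "Climate report", "tags": ["climate", "policy"]},
    {"title": "Football final", "tags": ["sport"]},
    {"title": "Election analysis", "tags": ["policy", "election"]},
]
personalised = rank_feed(feed, {"policy", "election"})
```

Even this crude rule already narrows the visible range of opinions to what the profile predicts; real recommender systems add engagement signals on top, which is where the amplifier effect comes from.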

Algorithms are complex; depending on their characteristics, they lack transparency and thus traceability, and they take on – sometimes autonomously – an ever-increasing number of human tasks. In the event of their "takeover of power", would they depend on legitimation by the people, meaning would they have to be elected, because they can make decisions and, given the amount of data, quickly develop a basis for those decisions? Or are algorithms tools, and will they remain technical tools that only require legal regulation, such as the EU's AI Regulation, which specifies what they are permitted to do and under what "conditions"? If so, can the people develop guidelines for dealing with these "intelligent", possibly even "more intelligent", algorithms?

As I already mentioned, when algorithms are employed by the state's executive, legislative or judicial authority, they affect the exertion of democratic rule. The crucial point concerns the democratic legitimation of sovereign decisions – the justification of the government acting in specific situations to exert its rule – which is an indispensable condition for a democracy.

State authority, which stems from the people, must be accountable to them according to Article 20 (2) GG – please pardon my repetition, but the principle of the people's sovereignty can't be emphasised often enough. The prevailing view differentiates between two types of legitimation: 1) personal legitimation refers to the justification of the person deciding (Should the decision be made by this person?), while 2) factual legitimation considers the justification of the decision on its merits (Should this decision be made?). Observing a certain overall level of legitimation is sufficient, so that deficits in one element can be balanced by the other. A complete substitution, however, is impossible.

A questionable issue arises as soon as we ask whether algorithmic output qualifies as a decision at all. Even the most intelligent, entirely autonomous program is to be classified as Weak AI and is therefore bound to its coded requirements. At any rate it should be clear to everyone that an algorithm can't itself be elected, because being elected presupposes an eligible subject and a personal decision.

Nevertheless, the output of an algorithm can have a controlling effect that moulds its decisional character and makes it a powerful instrument. Fully automated decisions are, to be sure, subject to strict limitations. But semi-automated decisions can also have a considerable impact on human behaviour because of their supposedly unimpeachable status (automation bias).

In this case, the accountability of the output exerts a profound influence on democratic legitimation, determining whether an algorithmic decision can still be considered part of the people's will. If this becomes impossible because of the lack of transparency of a "black box algorithm", deficits in legitimation arise. They can affect personal legitimation, when it is unclear how a decision is attributed to the official or institution in charge, or factual legitimation, when the algorithmic decision-making follows not the law, but its own rules, which are hard to retrace.

Whether and how far such legitimation deficits can be compensated for is another difficult question. Legal control, such as the AI Regulation of the European Union, is vital for such a (high-)risk technology in any case. However, if AI is not banned completely, the effectiveness of the regulators depends on their technological capability to x-ray the black box. If science fails to do so, all requirements on verification, documentation and software quality assurance must remain paper tigers.

Many companies try to reach end consumers directly to increase sales. Algorithms also enable the state to get in "direct" contact with citizens and to take their wishes into account – the best prerequisite for a democracy. Are algorithmically generated filter bubbles therefore necessary for consensus building, and, if so, why?

You are thinking about algorithmically driven support of direct democratic elements, I suppose. To be honest, I doubt that a direct democracy really is the "best" form of democracy. It surely has its benefits, as seen in Switzerland or the ancient Attic democracy, which are often upheld as shining examples of this form of democracy. But the good performance of those systems is due to historic, territorial or structural factors which aren't found everywhere, and certainly not in Germany.

On the other hand, direct democratic elements can help to implement the principle of the people's sovereignty more fully in a representative democracy.

A deliberative democracy, which focuses on public deliberation in order to integrate citizens in decision-making, is a good example for our discussion. The argument that is more convincing in terms of its content, and not (only) the formal majority, should determine a decision, because by considering all arguments an agreement on the "best" option can then be reached – at least theoretically. Thinking one step beyond, a deliberative process could gain a lot from algorithmic applications, for example online fora or bots that evaluate and process the range of opinions.

But I wouldn't claim that filter bubbles are necessary for reaching a consensus. First of all, there is a difference between private companies, which follow economic calculations and make a pre-selection from the information presented, and non-commercial institutions, which put all the information together for deliberative purposes. Secondly, any consensus depends on a given quorum. Where unanimity is required, a highly deliberative procedure might be suitable for advancing, rather than blocking, a democratic process.

When a simple or qualified majority is sufficient to make a decision, as in the German legal system, it would be inappropriate to try to conceive of a decision process in which all participants must share the result. An established representative democracy accepts that there will always be a minority with a different view and provides rights and privileges for its protection. Thus, a massive extension of direct democratic elements driven by algorithms is not really necessary at this time.

We are currently confronted with many crises; you yourself have used the keyword "polycrisis". Let's look at the environmental crisis. Some claim that capitalism will shrink due to a lack of fossil fuels, so that we cannot deal with the environmental crisis without chaos ensuing. They even go so far as to assert that a democratically planned economy is required for the distribution and rationing of scarce resources (e.g. drinking water, energy). Are algorithms suitable for a "democratically" planned economy because they can oversee and analyse complex situations more quickly and perhaps make "right/fair" decisions regarding distribution and rationing?

The economy has to be redesigned in order to successfully deal with climate change. Whether that means rejecting capitalism, I cannot say, as I am no economist.

One thing is certain – if we want to stick to capitalism, we must reform it, at least in the places where it hinders climate protection and coping with its consequences. Especially in this context, (more or less) intelligent, fast, (seemingly) neutral and faultless algorithms seem to be an obvious choice. In moderate forms they are already being used for ecological purposes, for example in "Smart Farming" in agriculture, where field robots are taught via machine learning where to weed, fertilise or sow. On a larger scale, parcels and machines are interconnected, and external data sources on weather and topography are integrated to optimise sowing and harvesting strategies. With "Smart Metering" in the energy industry, intelligent measurement systems contribute to coordinating the generation and consumption of energy and therefore to improving energy management.

A far more radical step would be the transformation of the free market economy into a planned economy, where scarcity of resources would force state intervention. This involvement could start with local bottlenecks and then extend to regional, national or even global levels.

If this path were chosen by policymakers, I would support using algorithms to solve distribution issues. These depend on more variables than one person or even whole task forces could ever consider, let alone evaluate correctly. Big Data – gigantic data stocks lying more or less organised in data warehouses, data lakes or data swamps – and its processing methods might provide the necessary due diligence.
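As a deliberately naive illustration (my own, not part of the study), even a proportional rationing rule is already an algorithmic distribution decision; real systems would weigh far more variables:

```python
# Toy sketch of algorithmic rationing: allocate a scarce resource
# proportionally to reported demand. A deliberately simple stand-in for
# the far richer optimisation that Big Data processing would allow.
def ration(supply, demands):
    total = sum(demands.values())
    # Each region receives its demand share of the available supply.
    return {region: supply * d / total for region, d in demands.items()}

# 900 units of supply against 600 units of total demand across regions.
allocation = ration(900, {"north": 100, "south": 200, "east": 300})
```

Note that even this trivial rule embeds a value judgement (proportionality rather than, say, need or equality), which is exactly why such decisions require democratic legitimation.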

Others rely on human innovative power to overcome crises. Will algorithms find faster and better solutions than humans, and are they therefore the key to becoming climate-neutral by 2045 in accordance with legal requirements, despite a possibly long market launch due to testing, studies etc.?

I think we have to abandon the idea that we are able to cope with the many crises we ourselves have caused using only our natural cognitive resources. By using algorithms to connect, communicate and transact globally, moving huge amounts of data while doing so – in 2020 the estimated global data volume came to 50 zettabytes – in fractions of a second, we have opened the proverbial Pandora's box and relinquished our hold on technology.

Digital dimensions know neither space nor time, at least not as we perceive them, and have developed processes we can't intervene in, even though we persuade ourselves otherwise. We live in a VUCA world, whose characteristics – volatility, uncertainty, complexity and ambiguity – are reflected in every major crisis of our time. Instrumentalising algorithms to navigate this world is an arduous task, but I don't see any alternative. Even if we don't reach climate neutrality by 2045 using algorithms, because reforms and innovation in politics and science are being blocked, without them we are surely never going to achieve that goal.

Mr Hans-Jürgen Jakobs said that the new "ism" was monopoly. Prof Dr Schupp advocates an unconditional basic income, while others advocate the shrinking of capitalism, and you are investigating whether algocracy will soon gain strength. What will our society look like in the future? Is an algocracy the desired system change that will guarantee success in overcoming these various crises?

I am afraid neither Mr Jakobs, Prof Dr Schupp nor I can offer a fool-proof recipe for success here. There is not just one switch that changes everything for the better. The thought is appealing, but unfortunately utopian.

Just as the polycrisis consists of several individual crises – the climate crisis, the crisis of democracy (due, among other things, to algocratic tendencies), the economic crisis, the security crisis – coping with it has to follow a multimodal strategy. Isolated attempts to rescue the climate, to reconcile all nations for world peace or to restore the integrity of democracy are first steps in the right direction. But unless they go hand in hand, they won't achieve a sustainable effect.

Ms Borchert, thank you for sharing your insights on the different aspects of algocracy.

Thank you, Dr Caldarola, and I look forward to reading your upcoming interviews with recognised experts, delving even deeper into this fascinating topic.

About me and my guest

Dr Maria Cristina Caldarola

Dr Maria Cristina Caldarola, LL.M., MBA is the host of “Duet Interviews”, co-founder and CEO of CU³IC UG, a consultancy specialising in systematic approaches to innovation, such as algorithmic IP data analysis and cross-industry search for innovation solutions.

Cristina is a well-regarded legal expert in licensing, patents, trademarks, domains, software, data protection, cloud, big data, digital eco-systems and industry 4.0.

A TRIUM MBA, Cristina is also a frequent keynote speaker, a lecturer at St. Gallen, and the co-author of the recently published Big Data and Law now available in English, German and Mandarin editions.

Sophie Borchert

Sophie Borchert studied Law at the University of Augsburg, Germany. Having passed her first state examination in 2020, she has been completing a doctorate focussing on public and European law, environmental and planning law under the supervision of Prof. Dr Martin Kment. Inspired by the joint publication of their book "Künstliche Intelligenz und Algorithmen in der Rechtsanwendung" (Artificial Intelligence and Algorithms in the Application of Law), Ms Borchert developed the idea for her PhD thesis "Von der Demokratie zur Algokratie? Der Einfluss von Algorithmen auf die Herrschaft des Volkes" (From Democracy to Algocracy? The Effect of Algorithms on the Rule of the People). At the same time, she conducts research in cooperation with Professor Dr Kment in the field of environmental law and the energy transition, from which further publications have come: "Intertemporalität in der Energiewende. Neukonstruktion des Umweltrechts unter verfassungsrechtlichem Einfluss" (Kment/Borchert, AöR 147 (2022), 582-647; translated: "Intertemporality in the Energy Transition. A New Design of Environmental Law under Constitutional Influence" (own translation)); and "Wie viel Beschleunigung verträgt die Rechtsstaatlichkeit? – Die Zulassung ‚vorvorzeitigen' Beginns gem. § 31e BImSchG in der kritischen Analyse" (Kment/Borchert, NVwZ 2023 (publication in progress); translated: "How Much Acceleration Does the Rule of Law Tolerate? – The Admission of 'Pre-Premature' Commencement according to § 31e BImSchG in Critical Analysis" (own translation)).
