Big Data and the Concept of Responsibility

Daniel Goeudevert – Photo: Hartmut Müller-Stauffenberg

While the industrial revolution initiated the process of automating production lines, digitalisation and the concomitant appearance of Big Data, Industry 4.0, AI and so forth will probably complete it. In your book – Das Seerosenprinzip1 – you have described in great detail how individual responsibility is disappearing with the advent of ever-increasing mechanisation. Should we worry that, with greater automation via the digital revolution, responsibility will vanish completely or perhaps even be handed over to robots?

In her Duet interview with Daniel Goeudevert, the French man of letters, automobile industry manager and business consultant, Dr Caldarola, author of the recent book Big Data and Law, discusses the meaning of responsibility in the digital age.

For the sake of our readers, let us go over some of the key takeaways from your book, Das Seerosenprinzip. With the advent of automated assembly lines, manufacturing a product was divided into numerous processing stages to increase efficiency (Taylorism). For this reason, the worker was responsible for his/her particular step in production, but not for the entire product. It follows that s/he could not and still cannot have an overview of the entire production process, beginning with purchasing the raw materials and processing the individual elements through to selling the final product, because of the numerous stages involved. This naturally means that s/he only takes responsibility for what was done in his/her step, but not for manufacturing the whole product. The situation of such a worker is quite different from that of a small artisanal workshop, where the product in question is made by one person who handles all aspects, from purchasing the materials needed to the design, execution and sale of the product. You conclude that with the advent of mechanisation, individual responsibility is disappearing. Furthermore, you ask who bears the overall responsibility for production with regard to issues such as environmental protection, “fair trade” in the delivery chains, and treating employees, suppliers, customers and so on with the respect and fairness which they deserve. How do you view responsibility and what does it mean for you? Who do you think is responsible for these “higher” issues, and can we ensure that someone takes responsibility for matters which serve us all?

Daniel Goeudevert: When I think of the word responsibility, or the expression to bear responsibility, then what comes to mind is being responsible for a procedure, a process, an event or even for a person.

Responsibility always has a beginning and an end.

The result of the procedure for which a person has taken responsibility can be a positive one, and the person responsible is happy to take the credit for it. The procedure can, however, go badly, and that same person has to accept responsibility for that result as well.

Responsibility is always linked to many expectations concerning, for example, one's position, power, knowledge, income and influence, among other things. These expectations are willingly accepted by the person responsible. There is, however, a downside to responsibility: the consequences, should the process fail, are just as gladly disavowed and handed over to someone else. It's a complex issue.

Let us first consider responsibility with regard to technology, because that is what Big Data is all about.

When considering your question concerning technology, responsibility and freedom (to decide), Martin Heidegger comes to mind. No other philosopher has considered the relationship between technology and humans as masterfully as Heidegger did.

Roughly eighty years ago he had already predicted what would transpire today. I can remember one of his statements in which he described the danger of developing technology as one of utility becoming “over-utility”, with technology having its own lack of purpose as its very purpose.

It is this statement which is crucial to our discussion. This lack of purpose of technology, or, stated differently, the momentum which technology generates on its own, can only be controlled to a certain extent. This realisation is essential to our later understanding of responsibility and freedom.

The technology of today has been refined and developed to such an extent that it can determine its own future. We can see this being played out in the advanced technology of Artificial Intelligence (AI), androids and robots, all of which can be designed to evolve further in their shape and thinking processes. “Thinking” is of course to be put in quotation marks here, since I do not believe that people can make another “being” think by means of technology or teach this being or robot what responsibility, freedom or other “soft” elements mean.

The issue of lack of purpose is important. Already in the 1980s and 1990s I had the feeling that the automobile was no longer fulfilling its original purpose, namely enabling the mobility of people. Instead, it was being used to nourish people's vanity and desire for speed. In Heidegger's terms, technology (the car) had progressed further on its own and was no longer true to its intention and nature, meaning the need people have to be mobile.

I fear people are susceptible to wanting to own a car or other product in order to portray themselves in a certain way; therefore, the original intent of the product, enabling mobility, is little by little being lost and is serving another purpose: self-promotion and ownership.

It is undoubtedly good to own a car. It of course has advantages and disadvantages. But nowadays ownership has only one advantage: projecting what I am or what I think I would like to be. Naturally this is true of other products as well, such as mobile phones, which have turned into status symbols.

By analogy, we can apply this way of thinking to the person responsible, such as the bosses of a company. They, too, are less concerned about their responsibility towards society, the environment, Fair Trade and employees than about advancing their own careers, raising their salaries and increasing their influence and power.

Having determined what the aims are, we can then ask what freedom and responsibility are. If I remain true to the original purpose – in this case, the goal of becoming mobile or the aim of ensuring the well-being of my co-workers – then I can profit from and enjoy a feeling of freedom.

If, however, I consider areas involving ownership or self-interest, then I find myself in a completely different region, one of dependency, to wit: owning money and property. It is true that I am responsible for the product that I own, in this case, an automobile. But here we must distinguish between the ethics of conviction and the ethics of responsibility, in accordance with the philosophy of Max Weber. At this point, the focus must be on the reasons why I own a certain thing and the responsibility associated with those reasons.

Need and expectation are then defined in accordance with the purpose in question.

When a company wishes to introduce an environmentally friendly product on the market because oil is becoming scarce, and the company wishes to safeguard non-renewable energy sources, then we can say that a specific need is being addressed. The company must then walk a fine line, offering said product at a price that is neither too high nor too low. After all, a rise in prices owing to dwindling oil reserves is to be avoided.

Addressing expectations is an entirely different matter. Here the company does not want to satisfy a particular need. In this case, it is all about the size and performance of a product: 6 cylinders are more than 4, and 8 are more than 6. If a company builds an 8‑cylinder vehicle instead of a 6‑cylinder one, thus enabling the car to go 250 instead of 200 kph, then it has an edge over the competition. Accordingly, customers will prefer the faster car. From this perspective, the basest expectations of customers are being met and not the noblest: the customer wants to show off, wants to drive fast, wants to be a macho type, etc. And addressing these expectations works because they are so simple.

One only has to figure out what the second-best on the market is up to. If that competitor builds a car that weighs so much, drives so fast and costs so much, then the company in question need only add a plus to all these criteria. With that quick fix, we now have a product that offers more than the second-best on the market. It all works according to the Max Planck principle: “Everything that can be measured is real.” Such expectations can be provoked by fomenting the expectation in question – this car can attain speed x – which is then fulfilled by the sale of the product.

Those responsible will behave correspondingly: if they are only concerned about their careers and incomes, then here too they only have to find out who is second on the market.

There is a fine line between what is being advertised and what is actually being offered. I get the potential customer excited, and I do this to satisfy his/her expectations. That is not allowed and is simply not done with regard to what is needed. When I play around with expectations, what happens is exactly what the communications expert Edward Bernays did. The purpose changes, and technology experiences self-development without having a purpose.

The most important factor is not the product but rather its function or the need which has to be fulfilled. Big Data, digitalisation and the many other technologies are merely a means to an end. Everyone talks about the aim of digitalisation in terms of how fast processes are occurring and how quickly they are spreading. For the most part, however, no one mentions what the specific need or purpose is.

Indeed, going back to Heidegger, it seems clear that the self-progression of technology, the speed with which we are being overwhelmed and its lack of purpose can no longer be controlled.

Either digitalisation will soon be upon us or we are right in the middle of it, meaning we are one step closer to total automation. People rely more and more on the requirements set by programs and machines and are losing their ability to do various things. For example, people have forgotten how to do their sums because they have a calculator. Similarly, we will soon lose the ability to read a map because we rely increasingly on navigation systems, and nobody will bother learning this type of skill either. Machines are supposed to be learning how to manage more and more tasks, whether in the form of AI, algorithms, robotics or similar, and will thus take them over from people and continue to learn independently. Do these machines, programs, robots etc. not make any mistakes? And if they do, who is responsible for these errors, given that responsibility has always been attached to the subject and not to the object? In other words, with the progression of digitalisation, will responsibility vanish completely or become redundant because no, or hardly any, actual people will still be involved in production processes?

Will responsibility disappear with increasing digitalisation? The answer is NO – even if at the end of a fully automated process only one person is left standing. That person, even if s/he is called God, will be responsible.

Of course, the person responsible will look for his/her own form of Satan, as in all monotheistic religions, such as Christianity, Judaism and Islam, to be able to shift the blame for his/her failure onto this “Satan”. Those who wrote the Bible or other religious texts also considered this issue, for if only one God, or possibly in the future one human, exists, then this one being cannot be responsible for both Good and Evil. Hence, this divine being requires an opponent, if you will, Satan, who has always existed and has always been responsible for Evil.

A dual system, a Janus head, has always been needed: one side representing Good and the other Evil, thus providing us with a being who is responsible for both Good and Evil. So long as many divinities existed, each of whom was assigned a variety of tasks, Good and Evil could be distributed.

However, a prerequisite for responsibility is the belief in values: Good, Evil, political or human values, among others. People have always been faced with the dilemma of having to choose between Good and Evil. Of course, whether machines, technology, Big Data, algorithms etc. represent Satan is an entirely different matter.

Every process or operation has a beginning and an end. The process is started by an initiator, who gets the ball rolling, and, at the end, we have someone responsible for the process, who places the product created by this process on the market.

Today, as in the past, it is difficult for the person at the head of a company to have a detailed overview of the processes involved in production. S/he can, and always could, find out what had transpired during the course of a certain process and could thus determine who had carried it out or who had done something wrong. The person performing certain tasks during the process in question is the one who caused the event in question, while the person in charge was and still is the person responsible.

In an era of Big Data and digitalisation, that person can no longer be said to be responsible with as much certainty. There might be a process which functions with machines, algorithms, AI and neural networks, and, of course, it is obvious that the person in charge could no longer have a real overview of such a process. Indeed, s/he is more likely to be waiting for a final result. The boss of the company can bear responsibility if s/he says, “I want to achieve a certain result”, even if s/he cannot make anyone responsible for an intermediate process.

If you view the topic of “Big Data” from this angle, it is paradoxical but true: it becomes easier to determine who is responsible. It will of course be possible to analyse the intermediate steps of the process in question owing to automated documentation. Paradoxically, it will no longer be possible to identify a subject, since the process will have been driven by machines, which is to say objects. It is like a black hole where no one can really find an initiator.

To summarise, the head of a company no longer – and far less than before – needs to have an overview of the work processes in order to bear responsibility.

For this reason, determining responsibility will work in this way: who began the process and who put the product on the market? And if I continue this scenario to its logical conclusion, then managers of companies have more responsibility than ever – especially if it is no longer possible to find a subject for the intermediate steps. The person responsible can still say at the end of the process: I am not satisfied with the result and will not introduce it on the market, even if doing so means possibly losing his/her job. So, in this scenario, we can imagine the danger of the head of a company viewing his/her duty solely as satisfying shareholders. Thus, greed, extremism, bias, intemperance, competition, maximisation etc. will all become increasingly significant. In this case, we would need morality as a corrective measure to prevent economics and science from derailing.

Are automation and now digitalisation leading to a loss of capability, thinking and feeling, which is to say to a loss of innovation and values, seeing as machines do not have morality?

Yes. Here the key phrase of our conversation, “Big Data”, comes into play: a modern form of technology that multiplies information.

Does this make the situation better or worse, or merely change it? It will probably make it worse, and you are right to ask this question, since Big Data will have an influence and will bring about change – probably for the worse.

I am reminded of T.S. Eliot's thoughts concerning information:

“Where is the Life we have lost in living? Where is the Wisdom we have lost in knowledge? Where is the Knowledge we have lost in information?”

That says everything, and I would also add the following:

“Where is the Information we have lost in social networks, and where is the sense of responsibility we have lost in Big Data?”

Big Data will probably make everything worse because the processing of data and information is not transparent and cannot be understood by people, even if companies are legally obliged to provide us with incredibly long data protection notices. The processing, storage and analysis are hidden in gigantic devices, algorithms, plans, computers, apps etc., so that people have neither an overview of nor insight into these processes, even though every single step taken by the co-worker or customer is automatically registered and documented via machines and their sensors in this era of Industry 4.0 and online shopping.

People see input and output so quickly that no one is capable of understanding the intermediate processing at the same speed or gaining an overview of the entire process. I can only hope that this does not break free at some point, for, if it does, we will surely lose control completely.

Machines, algorithms, Big Data and so on are not moral beings. They do not invent anything of their own volition, meaning they do not have the capacity to invent. At the moment, they can only improve themselves independently. It is important in this context to stress that they do not determine what the goal is and do not initiate any goals. Rather, they only optimise the processes in question.

There will, therefore, always be a programmer or a person or persons who presuppose a goal when they create and plan an entire development using Big Data and other technologies. There will always be people who initiate the pursuit of a certain aim, even if only one person is left owing to the high level of automation.

It is like in that wonderful book by Aldous Huxley, Brave New World: people were happy because they were constantly taking the drug Soma.

The Soma pills are a metaphor for the advertising and social media of today: a world in which people are made submissive and needy, in accordance with the theory of Edward Bernays, by preying upon their expectations. In this way, people are told which products they think they need. People – in this case the users of Facebook, Apple, Amazon etc. – believe themselves to be free and free to decide but, in reality, they are dependent on the drug, which in this scenario is social media. The Alpha Plus people from Huxley's work are the programmers or the bosses of the various social media platforms.

The number of concomitant consequences is enormous. If a hacker, or say a programmer, or, following Huxley, an Alpha Plus person, suddenly detects a small gap in the system, then the effect takes on a totally different dimension than before. This person can be there without being detected and is capable of shutting down an entire hospital in Düsseldorf or Munich, as recently seen in the news. And the situation can get much worse if this were to become the war of the future.

We consider ourselves to be well informed owing to analyses derived from Big Data or social media, but too much information kills information. We think we are not only well informed but also keep ourselves up to date quickly and all the time because we use the internet, without realising, however, that we have stopped using our own judgement. People confuse the amount of information with the ability to pass judgement on things, people or even a given state of affairs.

I think your questions are very timely. Particularly now, during the COVID-19 pandemic, we see how this situation has come to pass: everyone is providing information, commentaries and opinions concerning this matter. I stopped following what the scientists have to say long ago, be they virologists, epidemiologists, physicians, emergency doctors etc. Each one has something different to say, each contradicts the other, and they all think they know something about the subject. Furthermore, the politicians think they know as much as the scientists do.

For this reason, the film “Forschung, Fake und faule Tricks” (Research, Fakes and Rotten Tricks), or “La fabrique de l'ignorance” (The Factory of Ignorance), was shown on Arte at just the right time. In the film, the topic of agnotology was discussed – the doctrine of not knowing, or of ignorance – and how it is now being taught at university as a new discipline. There is a whole industry of science out there dedicated to spreading doubt concerning research results because they contradict commercial interests. There are even renowned Nobel prize winners who support certain directions or products because they are being lured in with research funds. There are lobbyists for established products who protect their sales markets and hinder new disruptive technologies and products from attaining success.

We can observe how this tremendous amount of information is transmitted to the average person, and we are all average people in this regard. Information in these dimensions reaches even the average scientist. Indeed, an average scientist no longer knows what s/he is saying and what s/he has to say, because so many research results are being produced, publicised and disseminated which oppose one another or are contradictory.

People in positions of power used to determine goals, whether in science, the economy or elsewhere. Nowadays the economy and research develop on their own without any catalyst.

Science can be divided into pseudo-science, junk science and real science. Yet everything is becoming less comprehensible owing to the reproduction of an information society, and it would be a coincidence if suddenly someone had a genuine moment of illumination.

If we all thought we knew everything simply because we had access to the internet, then this would be both a wonderful and a disastrous situation. We can see it in social media, where each person thinks s/he can make the big bucks. In this area, it must be said that Mammon – money, profit and greed – has won.

It is amazing how influencers are able to collect millions of followers and make lots of money in a short amount of time simply because they suddenly post something and say something which is basically uninteresting: I bought this item and I liked that one. Suddenly they have a lot of followers and are billionaires. One of the most bizarre developments of this nature in the past few years is the Kardashian family, who haven't really accomplished anything but have somehow earned billions.

A person can say something on the internet which can reach thousands of people, and yet simultaneously hundreds of thousands are able to contradict him via Twitter or Facebook. How are we supposed to form our own opinion these days? Where is the wisdom we have lost in knowledge? Where is the knowledge we have lost in information? Truer words have never been spoken.

Algorithms, AI and the like are all capable of finding correlations, or the best, fastest, most durable or cheapest option etc. In the end, are we only talking about quantitative parameters? What about qualitative aspects, ones that cannot be measured but which render life and labour human? In other words, are greed, bias, immoderation, ignorance, rivalry, maximisation and the like being promoted because of digital instruments?

Correct. Evaluating data produces quantitative results, while a person has the ability to take both quantitative and qualitative aspects into account.

People believe in values such as religion, the Ten Commandments, the basic rights of 1948, the ethics of Kant and Kierkegaard, but, regardless of the values, it would be interesting to know whether a machine could detect moral guidance or values if it read Kant or Kierkegaard.

You are correct when you state that quantitative parameters and thus baser feelings are being promoted. One of the best thoughts that I have ever encountered comes from John Steinbeck:

“…The things we admire in men, kindness and generosity, openness, honesty, understanding and feeling, are the concomitants of failure in our system. And those traits we detest, sharpness, greed, acquisitiveness, meanness, egotism and self-interest, are the traits of success. And while men admire the quality of the first, they love the produce of the second.” ― John Steinbeck

That is such a true observation of human nature and, at the same time, answers one of your questions: where is goodness in all of this, where is responsibility, and why do tender emotions get the short end of the stick when it comes to how we act?

Yes, I fear everything will become worse, and one thing more must be said today: the so-called Mammon – money, profit and greed – has definitely gotten the upper hand over all the gentler emotions or “soft factors”. If we look for the latter in the person responsible, be it the person responsible for developing a particular type of technology or technique or the proprietor or owner of a product, then these soft emotions, aspects or qualities can hardly be found, if at all.

The reason is that the people who are responsible for developing a product or managing a company only have one goal in mind: increasing the capital of the company or their own wages and satisfying their own shareholders, none of which can happen without profit.

This is also reflected in statements made by banks, which declared about thirty years ago that they were only interested in shareholder value and a 24% net gain, but not in the rest. This “rest” is just waste. If you look at those types of statements, then you really cannot be speaking about responsibility and ethics.

Steinbeck's observation shows us that people admire goodness but, at the same time, admire the produce of bad and evil behaviour and wish to attain it. Since machines, Big Data, algorithms etc. are permanently optimising gains, always working faster, increasing and improving productivity and, at the end of the process in question, increasing profits, these means promote the “Mammon” in people: greed, extremism, prejudice, intemperance, ignorance, rivalry, maximisation etc.

An earlier interview with Thomas Leyener made clear that part of a person's personality includes the freedom to make an ethical or moral decision. The occidental view of being a person encompasses the freedom of a person, and this very freedom also means being free to do Good or Evil. Thus, if a person must be free in order to do good, then a person will surely realise that this liberty also means that a person is free to do the opposite of good, that is, evil. This implies, therefore, that this freedom to do good cannot be separated from the responsibility we have to avoid evil. This correlation makes even clearer that freedom and responsibility correspond with one another and cannot be conceived in isolation from one another. Whoever wishes to act as a free individual is thus required by necessity to take responsibility for his/her actions. If we follow this argument to its logical conclusion, does this not mean that people will become less free with increasing levels of automation and digitalisation, especially if they are treated as equals to these non-human entities?

Yes, the radius of freedom is becoming more and more constricted. Since machines favour quantitative aspects and, in this way, “Mammon”, the path of freedom to decide between good and evil will become that much narrower. With the advent of automation, information overload, social media, lack of transparency and agnotology, it will become ever more difficult for a person to figure out what is required of him/her or what s/he can decide on his/her own.

Do you recall the new products from the financial sector, the derivatives? Many have tried to understand these products, and even most banking consultants have not been able to explain what they really are. They simply had no clue. Nevertheless, these derivatives are still being offered. In this way, we can see that the more complicated the procedure is, the needier a person becomes, and thus his/her freedom to make decisions will become more restricted. Furthermore, with the means stemming from today's media (the Soma pills), people will be persuaded to participate.

The ethical challenge concerning automated management processes involves finding the right balance between the benefit of heightened security or the avoidance of mistakes owing to automation, and avoiding giving the impression that one could surrender one's responsibility to a non-human entity. What would the “golden mean” look like?

It goes without saying that a non-human entity, whether mechanical or algorithmic, does not have any values, at least not by today's technical standards. People have values, and if tomorrow a non-living entity had values, then it would be comparable with God, who has given people values. Of course, people would rather hand their cares over to a non-human entity, a machine, but the machine is only a means, and not Satan. Compare this to the question: who is responsible for killing a person, the knife or the human agent? The knife, just like a machine, cannot be held responsible; only a person can.

If Big Data, algorithms or a machine are being used – which really amounts to a business development plan – then these tools process the business plan purely by means of logic. That is to say, it is still a person who is responsible for the business plan, as mentioned earlier. The person in question determines which problem is to be solved and whether the solution is to be introduced onto the market.

God is responsible for having created humans and, in the end, God is also responsible for what he has done. Goethe expressed it best in Faust, when Mephisto says: „Ich bin Teil jener Kraft, die stets das Böse will und stets das Gute schafft.“2

This is a brilliant thought, because what the quotation says is that, regardless of what we do, something can always happen which leads us to something different, and we cannot, in the interim, place responsibility on machines. Only a person, the subject, is responsible.

The question remains: can human freedom, with its requisite responsibility, even exist for any individual in the face of complex technological development? No individual can possibly understand automated management processes, given systems which are continuously improving and learning as well as other processes running in the background, and, in this situation, no individual could ever manage to evaluate the process in question from an ethical perspective. In this way, an ethical dilemma has come to pass: being responsible and yet, simply stated, not knowing precisely what it is I am doing. How would you approach this dilemma?

Let us consider the television, which sends gruesome moving images, for example the rape of a child, right into our living rooms. Technicians are able to understand how these images are transmitted to our houses, but most people are not.

In this situation, we can speak of several people bearing responsibility: the manufacturer and seller of the television, but also the person responsible for the content, as well as the viewer, who is watching such gruesome content or has decided to have a television in his/her living room.

In between all of this, there is a huge and complex process. At the beginning and at the end there is a person responsible who has the freedom to act differently and to decide. I will not, however, be able to remove myself completely from the process, because the content in question must be considered as well.

It will become even more difficult to avoid certain types of media, the Soma pills. If I do not own a television or a smartphone, or if I cannot be reached via email or WhatsApp, then life becomes even more difficult. The pressure to take the Soma drug is growing, and alternatives can scarcely be found.

Soma eco-systems are being created, in which one signs up in only three clicks, as with Amazon, but from which it is far more difficult to disengage oneself or change to another “eco-system”. And this separation process will certainly be made difficult by one of the people responsible. Withdrawal from the Soma drug takes time and energy, and the people responsible for the “eco-systems” in question are competing for every customer.

It is thus apparent how limited the freedom to decide and to bear responsibility is.

Processes are becoming more and more complicated and require more and more multi-disciplinary skills. If we think that freedom and responsibility are dependent on one another, can this condition even be fulfilled, given the complexity of the processes in question? The only remedy is trust. Who is responsible for trust? Governments, health insurers, tax authorities, companies such as Google, Amazon, Facebook etc.?

Yes, the world is becoming a more and more complicated place, one which is less transparent and less comprehensible. “Having trust” as a solution? Trust is very close to responsibility.

There is an element in trust, however, that I don't like: the element of blindness. Trust will become an alibi for not being able to explain a process or operation.

Trust is a highly individual matter because it involves what a person believes in without knowing what it is.

I am reminded of a quotation by Hans Jonas: „Handele so, dass die Wirkungen Deiner Handlungen verträglich sind mit der Permanenz echten menschlichen Lebens auf Erden.“3 This sentence is a perfect union of these matters: trust, responsibility and their repercussions, behaviour, belief, values – it is all there.

I am also reminded of The Little Prince by Saint-Exupéry: when the prince puts his faith in the fox, the fox tells him: “People have forgotten this truth. But you mustn't forget it. You become responsible forever for what you've tamed.”

So, the closer we work together, the more human and the greater the responsibility is that we bear for one another. That is something a machine cannot do.

You also asked who bears responsibility for trust. With the advent of digitalisation, we can certainly observe a shift of power from governments to the private sector, which is like a tectonic shifting of responsibility.

It used to be that governments and their administrations were responsible for the well-being of the people, with either a Maria Theresa or a Hitler at the head of state.

Today Google, Amazon, Facebook, Alibaba and others are highly influential, but they don't give a damn what any one government might want or what constitutional fiduciary duties the governments in question might have. They satisfy expectations and are themselves driven by Mammon.

Even if so many awful, untrue things, indeed so many downright lies, are being conveyed on social media, those responsible for the social media in question are quick to affirm that everyone has the right to do what s/he wants.

GAFAM freed themselves long ago from the constraints of big governments. They consider themselves to be independent and are part of a world where Mammon is the goal. The means which they employ is the stoking and steering of expectations, the propaganda of Edward Bernays or Aldous Huxley's Soma drug, which turns people into needy addicts. In this way, people will be less free, and the radius of freedom will become smaller than ever.

And people as well as governments are not even aware that their sphere of freedom is shrinking in every dimension. Honoré de Balzac might call it „La peau de chagrin“ (the skin of sorrow). Governments can only determine the space that business has left them. They no longer make decisions, because politicians look to their re-election, their own advancement, their power and influence, and pay little regard to the needs of citizens.

Indeed, the Brave New World which Huxley described as early as the 1930s is upon us, and we have been part of this development for far longer than we realise. In an interview in the 1960s, Huxley was asked if he believed his vision would develop in that way. He replied that he had prophesied this development for 2034; he had only got the date wrong, because the development had already started in the 1960s and was proceeding in accordance with his book.

This brave new world has come to pass faster than we could have imagined. It is surely no coincidence that Huxley mentioned Henry Ford in connection with the introduction of Taylorism (the sequencing of work).

Each sequence is optimised to perfection and beyond, until production and productivity proceed optimally. Responsibility for the entire process, however, cannot be Taylorised, because that is simply not possible. Of course, one could attempt it, but the person responsible would lose his/her credibility if s/he were to tell his/her employees that s/he was not responsible for anything because everything was running so perfectly thanks to Taylorisation and automation.

Mr Goeudevert, thank you for sharing your insights on the meaning of responsibility in the digital age.

Thank you, Dr Caldarola, and I look forward to reading your upcoming interviews with recognized experts, delving even deeper into this fascinating topic.


1 “The Water Lily Principle”

2 “I am part of that power which always wants evil and always creates what is good.”

3 “Act so that the effects of your actions are compatible with the permanence of true human life on earth.”

About me and my guest

Dr Maria Cristina Caldarola

Dr Maria Cristina Caldarola, LL.M., MBA is the host of “Duet Interviews”, co-founder and CEO of CU³IC UG, a consultancy specialising in systematic approaches to innovation, such as algorithmic IP data analysis and cross-industry search for innovation solutions.

Cristina is a well-regarded legal expert in licensing, patents, trademarks, domains, software, data protection, cloud, big data, digital eco-systems and industry 4.0.

A TRIUM MBA, Cristina is also a frequent keynote speaker, a lecturer at St. Gallen, and the co-author of the recently published Big Data and Law now available in English, German and Mandarin editions.

Daniel Goeudevert

Daniel Goeudevert is a French man of letters, automobile industry manager and a business consultant. He has worked and lived in Germany for many years.

At first Mr. Goeudevert was CEO of the German Ford-Werke AG and later CEO for purchasing with Volkswagen AG.

Daniel Goeudevert was a member of numerous supervisory boards of international companies. He was a member of the Club of Rome and was the first vice-president of the International Green Cross, the well-known environmental protection foundation initiated by Mikhail Gorbachev. Since 1998 he has been vice-president of FEDRE (Fondation Européenne pour le Développement durable des Régions – the European Foundation for the Sustainable Development of the Regions). Also since 1998, Mr. Goeudevert has been vice-president of EFI (Europe Finance et Industry). He also serves as a consultant for UNESCO.

Daniel Goeudevert wrote an autobiography entitled Wie ein Vogel im Aquarium, which was a bestseller in Germany. Since then, he has published numerous books. Mit Träumen beginnt die Realität – Aus dem Leben eines Europäers was also on the bestseller list for a long time.

