Big Data, morality and informational self-determination

Dr Thomas Leyener

With the advent of the GDPR, the EU has also introduced the concept of informational self-determination with regard to data processing. What does informational self-determination mean? From what moral and cultural norms, or even forms of state, has this concept developed? Why has this right become established in Europe while not being found in most regions of the world?

In her Duet Interview with theological expert and philosopher Dr Thomas Leyener, Dr Caldarola, author of Big Data and Law, discusses different ethical views and behaviours when using Big Data.

On which basic moral principle is informational self-determination based, and how has it developed?

Dr Thomas Leyener: We can only meaningfully speak of informational self-determination if a person has already been granted his1 own personality and individuality. If a human can be viewed as a “person”, then we can derive personal rights from this idea.

If we consider the concept of a “person” from a Christian as well as from a philosophical perspective, then we soon realize that, when we refer to a person, we mean the uniqueness, the value and dignity and the distinctive individuality of a human. Every human is a person, although we are connected in many ways: I share the same sex with many of them; we all have roughly the same physiological processes taking place in our bodies; with a correct match, a person can live with the organs of another, etc. And yet each person is one of a kind and is thus unique. We are all persons – each and every one of us is an individual.

Part of a human’s personality is the “freedom to make ethical/moral decisions”: according to the Western view of humanity, a view which has been significantly shaped by Christian values, the freedom intrinsic to a human includes the freedom to do Good or Evil. Even the opening stories of the Bible – the creation of man in the Garden of Eden and the subsequent account of Cain killing his brother Abel – allude to this subject.

The dignity of a person also consists of being able to make ethical decisions and indeed having to make them. Because ethical decisions and ethical judgements are concerned with the question of Good and Evil, such a decision must be made freely and in complete liberty. Doing Good – for example, loving a person, helping and supporting someone, feeding, educating and raising a human etc. – can only be done freely. A higher form of goodness is love, and love cannot be forced: I am not able to love against my will.

Only in the presence of freedom can such values be realized – such as supporting one another, solidarity, or love. These values, or attitudes, can only be conceived as freely given by a person.

If a human as a person must be at liberty to be able to do Good, he will immediately realize that, together with the freedom to do Good, the freedom to do the opposite of Good, namely Evil, is also an intrinsic part of this liberty. This means that part of the freedom to do Good invariably includes the responsibility to avoid wickedness. This view shows us clearly that freedom and responsibility correspond to one another and cannot be conceived of separately from one another. Whoever wishes to act as a free individual must necessarily also be responsible for what he does.

The freedom to make ethical decisions must therefore be kept open by the state and by religion and cannot be limited to any significant degree. Correspondingly, however, the responsibility we just spoke of can be enforced, and punished if this very responsibility is not assumed or has been significantly impinged upon.

What about the responsibility of the individual in the digital world? Already with the advent of the Industrial Revolution and the concomitant automation of many processes, people began to rely more and more on machines and automatons and less on their own perceptions and ability to judge. We can even say that, with the introduction of assembly lines and supply chains, people have become small cogs who are hardly in a position – or only with great difficulty – to have an overview of the entire process and its complexity. The question is then: has this automation taken away a great deal of the responsibility of the individual and, if we consider the introduction of further digital mechanisation, the use of algorithms, robots etc., will this development take away even more of our sense of individual responsibility?

If freedom and responsibility are inseparable, belong together and are to be conceived in relation to one another, then, at the very moment when responsibility has been surrendered, freedom has been yielded as well. In addition, if being free and being responsible for one’s actions are characteristics of one’s personality, then it must also follow that a person can never give up his responsibility; management processes have to be conceived within a framework of human responsibility. It is hard to imagine a human being transferring his responsibility to a machine or a management process. When an airplane in autopilot mode or an autonomously driven car has an accident, the machine has not acted irresponsibly, and it will not be possible to assert that a machine is guilty of a misdemeanour – rather, we can only refer these actions to a person behaving as a subject acting freely.

The ethical challenge with regard to automated management processes consists of finding the right balance between the advantage arising from a gain in security or from avoiding errors owing to the automation and, at the same time, avoiding the impression that one could surrender one’s responsibility to a non-personal entity.

Basically, there is, in my opinion, a requirement to maintain the possibility of experiencing human freedom in conjunction with the requisite responsibility. This still raises the question of whether this is even feasible for the individual, given the existing complex technical development. Furthermore, this technical development has already made it virtually impossible for the individual – with regard to automated management processes, or in the context of systems that learn on their own – to have an overview of the various processes operating in the background, let alone evaluate them from an ethical perspective. Thus, an ethical dilemma arises: on the one hand, to be responsible, and, on the other hand, to state it simply, not to know exactly what I am doing when I take part, to name an example, in social networks.

This question must be further considered elsewhere.

My favorite quote:


“Liberty means responsibility. That is why most men dread it.”

George Bernard Shaw

Particularly with regard to algorithms, it should be easier for all of us as individuals to consider and decide on a variety of issues, because algorithms can correlate many parameters and factors quickly and easily for us. Algorithms are being used more and more, and we already cannot seem to do without them – or indeed avoid them. They are in our cars, our homes – in short, in our everyday lives. Yet algorithms are by no means neutral and can be manipulated, thus influencing the decisions of the individual or a group. Is it possible to speak of freedom and responsibility if algorithms are teamed up with criminal forces? Can the individual person even decide about freedom, responsibility and ethics, or does a small group of experts or powerful people decide on them? Can we even still speak of self-determination, or shouldn’t we be talking about determination by others? In other words, hasn’t the basic principle of informational self-determination in data protection law simply become window dressing because of the technical environment surrounding it?

If it can be guaranteed that a person is still in a position to make informed decisions because he has been granted full and transparent disclosure of the fact that an algorithm has taken over managing some process, and of how this is taking place, then the responsibility of the individual or of a society is again at the forefront. The possible abuse of the technology involved when algorithms are in control does not necessarily mean they should not be used. In my opinion, the key issue in this ethical debate must be settled before this particular technology is used: is the technology in question transparent enough, and capable of being independently monitored, for us to recognize that it is being abused, or is this abuse “hidden” in the fraudulently produced “new reality via new data”? If this question is posed correctly, then we are really just talking about complete transparency in the construction and application of algorithms.

Each and every one of us is overwhelmed when it comes to leading an independent ethical discourse or making a decision based on differentiated reflection on the matter. The individual requires, to a certain extent, the “support” of independent and expert institutions which ensure that a suitable public discourse is taking place and which guide that person in terms of content. Public facilities or institutions can take on this role (e.g., the German Ethics Council, expert societies, journalists, even churches or political parties) if they appear to be knowledgeable and credible concerning the matter in question.

Whatever form the ethical discourse is to take, given the highly complex conditions and topics involved, this issue has to be further discussed elsewhere.

China is pushing towards a digital future at a dizzying speed and aims to be the global market leader in all matters having to do with AI. By the end of 2020, there were to be 600 million surveillance cameras in all public spaces. Each step and every movement is to be recorded – and not just using surveillance cameras. Everything is transparent and visible. At train stations, people are not only being x-rayed, they are openly being exposed by making public, for example, whether taxes have been paid, traffic regulations flouted or environmental requirements abided by. Everything is being collected and evaluated in a central database known as the “Social Credit System”, a data-based rating system which is meant to influence the behaviour of people. Kai Strittmatter, Ranga Yogeshwar as well as Shoshana Zuboff point to an image of surveillance capitalism. Is another sort of ethics at play there? If so, is it because the form of the state is different? What are the reasons, and what are the differences compared to Europe?

I think the difference you refer to is partly based on our Western history of philosophy as well as our Christian history, both of which led to a philosophical Enlightenment emphasizing the significance of the individual. Even today, a statement of Immanuel Kant’s that is over 250 years old can be seen as a summary of this Enlightenment: “Habe Mut, dich deines eigenen Verstandes zu bedienen.”2

If the individual has been accorded freedom as well as his own intellect, together with a corresponding ability to judge, then this condition is initially considered a general one: it is attached to the dignity of the person and is independent of one’s intelligence and social standing.

The behaviour of the individual is then confronted with what is considered valid within the group or within society. What “one does” does not necessarily have to conform with “what I do”, but what I do must take into account its effect upon the community – individual action then bears a certain responsibility with regard to social and communal actions. This is particularly clear when we look at honesty in paying taxes or at ecological accountability.

Moral action has to be plausible and justified to the individual, and it has to come about in the form of a dialogue, taking into consideration the requirements and claims of society. That is what is meant by ethical discourse.

The actions of the individual have an effect on “the whole” (e.g., wearing a mask in the COVID-19 pandemic) and the whole has an effect on the individual (e.g., limiting rights of freedom in order to break infectious cycles). This must be settled in an ethical discourse.

Data protection laws vary all over the world, and not every country has turned informational self-determination into a basic principle. On the other hand, data cross borders every second and travel all over the globe. Globalisation is part of our daily lives, and the technology involved in data processing does not stop at the border. Should there be a general and uniform understanding of how we deal with data – a uniform ethic – so that projects such as Big Data, Industry 4.0 or digital eco-systems can be achieved? Do we need a discourse, and how would it develop given that no country wants to fall behind in terms of competitiveness?

If technological and economic development can only take place and be viewed at a global level, then we actually need global ethics, which is to say an ethical dialogue to accompany these developments.

In antiquity, the advent of trade had already led to a cultural exchange. Our terms “ethos” and “morals” go back to the words used by the Greeks and Romans to describe the behaviour of other peoples. Trade has since become a worldwide phenomenon, meaning there is already a global cultural exchange taking place. This in turn has led to the complex situation of communicating by means of the principles of ethical decisions, or legal logic, as well as of including ethical and social standards in trade agreements, as is now being attempted in the case of the contracts concerning Brexit.

A reason for founding the World Ethos Institute in Tübingen 25 years ago has to do with the global development we have just described. A goal of the institute is to establish the preconditions under which the ethical aspects of industrial developments become negotiable. This aim is to be achieved by joint learning processes and via a dialogue of the religions.

Which practices in terms of dealing with data will become standard? What is actively being done to find a consensus on how data is to be handled? Is an active debate on data handling currently taking place? Is the World Ethos Institute leading this discourse, and what results, or rather intermediary results, exist? Are differences being ignored? Are the technological possibilities the only element setting the tone?

In the past, an ethical debate or discourse on technological, economic and political developments took place much later than the events themselves, or only when undesirable consequences had made themselves noticeable. We must also take into account “the power of the situation at hand”: whatever is possible will be done (on this, cf. the freedom of being able to do Good or Evil).

Taking all of this into consideration, it becomes obvious that an ethical discourse has to be held at the beginning of a development, not lag behind it. The lengthy debate on climate change and on ecologically responsible economic activity has shown that values such as sustainability and ecological responsibility have become established and have led to economic concepts changing to include them. In the end, this must surely mean that sustainable economics can be profitable.

The World Ethos Institute believes that global learning processes are necessary, to name one example, for politicians and those in management positions to communicate with one another on values such as sustainability, ecological accountability, rights of freedom etc. Part of the reason for its founding was the belief that communication processes of this type require a dialogue among the various religions. This belief seems obvious since, from a historical point of view, religions shape our mindset and can possibly help create peace.

China invests significantly more capital in digitalisation than other countries do. Owing to the size of their population, untold amounts of data are available to the Chinese, data which are essential for a digital future. China is ahead of the game in this digital revolution. Are these all prerequisites for deciding to process data Chinese-style at the global level?

Ethical decisions are very complex because they are often dilemma decisions: there is always a trade-off in deciding whether or not some benefit is justified if, at the same time, some sort of collateral damage is unavoidable. Such decisions have to be shaped in such a way that the benefit, in proportion to the harm being done, can still be recognized and desired by the majority. Often, as in medical ethics, a benefit can only be achieved if some damage is accepted at the same time. This trade-off has to be considered and accepted by a majority.

An economic benefit could hardly justify a decision on its own account if, at the same time, massive infringements upon freedom, self-determination, dignity etc. were to follow. One could also tie the necessity of an ethical discourse to the economic question: can we afford to give up the concomitant discussion and consideration of the ethical consequences of a certain development – or, if we waive the right to ethical reflection, will this renunciation simply lead to even greater economic or social harm in the fullness of time?

A dialogue, a discourse as well as a consideration of a certain trade-off all require time in a democratic system. Do we have the time for such a significant process? Or will we be overtaken by non-democratic countries, and by other types of ethics, because they can react more quickly? Speed, favoured by the technology of digitalisation, is a significant factor. Will this characteristic be fatal to Europe, democracy, ethics and Christianity? Or will Europe find a way to lead such a discourse more quickly and to implement the results of a majority consensus in order not to lag behind in the global market?

It bears repeating that economic benefit and ethical discourse do not cancel each other out or deter one another. On the contrary, they can even be economically successful together. Examples of this realisation can be found in ecological debates or in the concept of a social market economy: owing to ecologically responsible behaviour, new products are created and, together with them, the concomitant markets. A social market economy always attempts to solve the ethical dilemma of wealth and its equitable distribution. Sustainability and social standards in production and trade (“supply chain law”) have become well-established and recognized values. Such developments give me hope that, before some irreversible disaster comes to pass in one area or another, our inherent common sense and ethical reflection will provide the necessary input. A discussion such as yours, Dr Caldarola, contributes to this process gaining the upper hand in time.

Thank you for your kind words. I look forward to your comments concerning my next Duets.

1 The masculine form is used solely for the sake of brevity.

2 Have the courage to use your own mind.

About me and my guest

Dr Maria Cristina Caldarola

Dr Maria Cristina Caldarola, LL.M., MBA is the host of “Duet Interviews”, co-founder and CEO of CU³IC UG, a consultancy specialising in systematic approaches to innovation, such as algorithmic IP data analysis and cross-industry search for innovation solutions.

Cristina is a well-regarded legal expert in licensing, patents, trademarks, domains, software, data protection, cloud, big data, digital eco-systems and industry 4.0.

A TRIUM MBA, Cristina is also a frequent keynote speaker, a lecturer at St. Gallen, and the co-author of the recently published Big Data and Law now available in English, German and Mandarin editions.

Dr Thomas Leyener

Dr Thomas Leyener, born in 1955, married and father of four grown-up daughters as well as grandfather of nine grandchildren. He studied Catholic theology, focusing on the boundary issues of theology and psychology. He was active for many years in advanced education. He now works in a hospital managed by a Christian corporation.
