With the advent of the GDPR, the EU has also introduced the concept of informational self-determination with regard to data processing. What does informational self-determination mean? From what moral and cultural norms or even state forms has this concept developed? Why has this right become established in Europe while not being found in most regions of the world?
In her Duet Interview with theological expert and philosopher Dr Thomas Leyener, Dr Caldarola, author of Big Data and Law, discusses different ethical views and behaviours when using Big Data.
On which basic moral principle is informational self-determination founded, and how has it developed?
Dr Thomas Leyener: We can only meaningfully speak of informational self-determination if a person has already been granted his1 own personality and individuality. If a human can be viewed as a “person”, then we can derive personal rights from this idea.
If we consider the concept of a “person” from a Christian as well as from a philosophical perspective, we soon realize that, when we refer to a person, we mean the uniqueness, the value and dignity and the distinctive individuality of a human being. Every human is a person, even though we are connected in many ways: I share the same sex with many others; roughly the same physiological processes take place in all our bodies; given a correct match, a person can even live with the organs of another, etc. And yet each person is one of a kind and is thus unique. We are all persons – each and every one of us is an individual.
Part of a human’s personality is the “freedom to make ethical/moral decisions”: According to the Western view of humanity, a view which has been significantly shaped by Christian values, the freedom intrinsic to a human includes the freedom to do Good or Evil. Even the opening chapters of the Bible allude to this subject: after the creation of man in the Garden of Eden, Cain kills his brother Abel.
The dignity of a person also consists of being able to make ethical decisions and indeed having to make them. Because ethical decisions and ethical judgements are concerned with the question of Good and Evil, such a decision must be made freely and in complete liberty. Doing Good, for example loving a person, helping and supporting someone, feeding, educating and raising a human, etc., can only be done freely. A higher form of goodness is love, and love cannot be forced: I am not able to love against my will.
Only in the presence of freedom can such values be realized – such as supporting one another, solidarity, or love. These values, or attitudes, can only be conceived as freely given by a person.
If a human as a person must be at liberty to be able to do Good, he will immediately realize that together with the freedom to do Good the freedom to do the opposite of Good, namely, Evil, is also an intrinsic part of this liberty. This means that part of the freedom to do Good invariably includes the responsibility to avoid wickedness. This view shows us clearly that freedom and responsibility correspond to one another and cannot be conceived of separately from one another. Whoever wishes to act as a free individual must necessarily also be responsible for what he does.
The freedom to make ethical decisions must therefore be kept open by the state and by religion and cannot be limited to any significant degree. Correspondingly, however, a failure to assume the responsibility we just spoke of, or a significant breach of it, can be sanctioned and punished.
What about the responsibility of the individual in the digital world? Already with the advent of the Industrial Revolution and the concomitant automation of many processes, people began to rely more and more on machines and automatons and less on their own perceptions and ability to judge. We can even say that, with the introduction of assembly lines and supply chains, people have become small cogs who can gain an overview of the entire process and its complexity only with great difficulty, if at all. The question is then: Has this automation taken away a great deal of the individual’s responsibility, and, if we consider further digital mechanisation, the use of algorithms, robots etc., will this development take away even more of our sense of individual responsibility?
If freedom and responsibility are inseparable, belong together and are to be conceived in relation to one another, then, at the very moment when responsibility has been surrendered, freedom has been yielded as well. In addition, if being free and being responsible for one’s actions are characteristics of one’s personality, then it must also follow that a person can never give up his responsibility; management processes, rather, have to be conceived within a framework of human responsibility. It is hard to imagine a human being transferring his responsibility to a machine or a management process. When an airplane in autopilot mode, or an automatically driven car, has an accident, the machine has not acted irresponsibly, and it will not be possible to assert that a machine is guilty of a misdemeanour; rather, such actions can only be referred back to a person behaving as a freely acting subject.
The ethical challenge with regard to automated management processes consists of finding the right balance between the advantage arising from a gain in security or avoiding errors owing to the automation and, at the same time, avoiding the impression that one could surrender one’s responsibility to a non-personal entity.
Basically, there is, in my opinion, a requirement to preserve the possibility of experiencing human freedom together with the requisite responsibility. This still raises the question of whether this is even feasible for the individual, given the complexity of existing technical development. Furthermore, this technical development has already made it virtually impossible for the individual, with regard to automated management processes or systems that learn on their own, to gain an overview of the various processes operating in the background, let alone evaluate them from an ethical perspective. Thus, an ethical dilemma arises: on the one hand, to be responsible, and, on the other, to put it simply, not to know exactly what I am doing when I take part in, for example, social networks.
This question must be further considered elsewhere.
My favorite quote:
“Liberty means responsibility. That is why most men dread it.” – George Bernard Shaw
Particularly with regard to algorithms, it should be easier for all of us as individuals to consider and decide on a variety of issues, because algorithms can correlate many parameters and factors quickly and easily for us. Algorithms are being used more and more, and we already cannot seem to do without them, or indeed avoid them. They are in our cars, our homes, in short, in our everyday lives. Yet algorithms are by no means neutral and can be manipulated, thus influencing the decisions of the individual or a group. Is it possible to speak of freedom and responsibility if algorithms are teamed up with criminal forces? Can the individual person even decide about freedom, responsibility and ethics, or does a small group of experts or powerful people decide these questions? Can we even still speak of self-determination, or shouldn’t we rather be talking about determination by others? In other words, hasn’t the basic principle of informational self-determination in data protection law simply become window dressing because of the technical environment surrounding it?
If it can be guaranteed that a person is still in a position to make informed decisions, because he has been given full and transparent disclosure of the fact that an algorithm has taken over managing some process and of how this is taking place, then the responsibility of the individual, or of a society, is again at the forefront. The possible abuse of the technology behind algorithmic control does not necessarily mean such technology should not be used. In my opinion, the key issue in this ethical debate arises before the technology in question is used: Is it transparent enough and capable of being independently monitored, so that abuse can be recognized, or is the abuse “hidden” in a fraudulently produced “new reality via new data”? If this question is posed correctly, then we are really just talking about complete transparency in the construction and application of algorithms.
Each and every one of us is overwhelmed when it comes to leading an independent ethical discourse or making a decision based on differentiated reflection on the matter. The individual requires, to a certain extent, the “support” of independent and expert institutions which ensure that a suitable public discourse is taking place and which guide that person in terms of content. Public facilities or institutions can take on this role (e.g., the German Ethics Council, expert societies, journalists, even churches or political parties) if they appear knowledgeable and credible concerning the matter in question.
Whatever form the ethical discourse is to take, given the highly complex conditions and topics involved, this issue has to be further discussed elsewhere.
China is pushing towards a digital future at a dizzying speed and aims to be the global market leader in all matters having to do with AI. By the end of 2020, 600 million surveillance cameras were to be installed in public spaces. Each step and every movement is to be recorded, and not just by surveillance cameras. Everything is transparent and visible. At train stations, people are not only being x‑rayed; they are also openly exposed through the publication of, for example, whether taxes have been paid, traffic regulations flouted or environmental requirements abided by. Everything is being collected and evaluated in a central database known as the “Social Credit System”, a data-based rating system which is meant to influence people’s behaviour. Kai Strittmatter, Ranga Yogeshwar as well as Shoshana Zuboff point to an image of surveillance capitalism. Is another sort of ethics at play there? If so, is it because the form of the state is different? What are the reasons, and what are the differences compared to Europe?
I think the difference you refer to is partly based on our Western history of philosophy as well as our Christian history, both of which led to a philosophical Enlightenment emphasizing the significance of the individual. Even today, a statement by Immanuel Kant that is well over 200 years old can be seen as a summary of this Enlightenment: “Habe Mut, dich deines eigenen Verstandes zu bedienen.”2
If the individual is accorded freedom as well as his own intellect, together with a corresponding ability to judge, then this condition is regarded as universal: it attaches to the dignity of the person and is independent of one’s intelligence and social standing.
The behaviour of the individual is then confronted with what is considered valid within the group or within society. What “one does” does not necessarily have to conform with “what I do”, but what I do must take into account its effect upon the community; individual action then bears a certain responsibility with regard to social and communal actions. This is particularly clear when we look at honesty in paying taxes or at ecological accountability.
Moral action has to be plausible and justified to the individual and it has to come about in the form of a dialogue, taking into consideration the requirements and claims of society. That is what is meant by ethical discourse.
The actions of the individual have an effect on “the whole” (e.g., wearing a mask in the COVID-19 pandemic) and the whole has an effect on the individual (e.g., limiting rights of freedom in order to break infectious cycles). This must be settled in an ethical discourse.
Data protection laws vary all over the world, and not every country has turned informational self-determination into a basic principle. On the other hand, data cross borders every second and travel all over the globe. Globalisation is part of our daily lives, and the technology involved in data processing does not stop at the border. Should there be a general and uniform understanding of how we deal with data, a uniform ethic, so that projects such as Big Data, Industry 4.0 or digital eco-systems can be achieved? Do we need a discourse, and how would it develop, given that no country wants to fall behind in terms of competitiveness?
If technological and economic development can only take place and be viewed at a global level, then we actually need global ethics, which is to say an ethical dialogue to accompany these developments.
In antiquity, the advent of trade had already led to a cultural exchange. Our terms “ethos” and “moral” go back to the words used by the Greeks and Romans to describe the behaviour of other peoples. Trade has since become a world-wide phenomenon, meaning there is already a global cultural exchange taking place. This in turn has led to the complex task of communicating the principles underlying ethical decisions and legal reasoning, and of including ethical and social standards in trade agreements, as is now being attempted in the contracts concerning Brexit.
A reason for founding the World Ethos Institute in Tübingen 25 years ago has to do with the global development we have just described. A goal of the institute is to create the preconditions under which the ethical aspects of industrial developments can be negotiated. This aim is to be achieved through joint learning processes and a dialogue among the religions.
Which practices in dealing with data will become standard? What is actively being done to find a consensus on how data are to be handled? Is an active debate on data handling currently taking place? Is the World Ethos Institute leading this discourse, and what results, or at least intermediate results, exist? Are differences being ignored? Are the technological possibilities the only element setting the tone?
In the past, an ethical debate or an ethical discourse on technological, economic and political developments took place much later than the events themselves or only when undesirable consequences had made themselves noticeable. We must also take into account “the power of the situation at hand”: whatever is possible, will be done (On this, cf. the freedom of being able to do Good or Evil).
Taking all of this into consideration, it becomes obvious that an ethical discourse has to be held at the beginning of a development and must not lag behind it. The lengthy debate on climate change and on ecologically responsible economic activity has shown that values such as sustainability and ecological responsibility have become established and have also led economic concepts to change and incorporate these values. In the end, this must surely mean that sustainable economic activity can be profitable.
The World Ethos Institute believes that global learning processes are necessary, for example for politicians and those in management positions to communicate with one another about values such as sustainability, ecological accountability and rights of freedom; part of the reason for its founding was the belief that communication processes of this type require a dialogue among the various religions. This belief seems plausible since, from a historical point of view, religions shape our mindset and can possibly help to create peace.
China invests significantly more capital in digitalisation than other countries do. Owing to the size of its population, untold amounts of data are available to China, data which are essential for a digital future. China is ahead of the game in this digital revolution. Are these all prerequisites for deciding that data will be processed Chinese-style at the global level?
Ethical decisions are very complex because they are often dilemma decisions: there is always a trade-off in deciding whether or not some benefit is justified if, at the same time, some sort of collateral damage is unavoidable. Often, as in medical ethics, a benefit can only be achieved if some damage is accepted at the same time. Such decisions have to be shaped in such a way that the benefit, in proportion to the harm done, can still be recognized and desired by a majority.
An economic benefit could hardly justify a decision on its own account if, at the same time, massive infringements upon freedom, self-determination, dignity etc. were to follow. One could also tie the necessity of an ethical discourse to the economic question: Can we afford to give up the concomitant discussion and consideration of the ethical consequences of a certain development, or, if we waive the right to ethical reflection, will this renunciation simply lead to even greater economic or social harm in the fullness of time?
A dialogue, a discourse, and the consideration of a certain trade-off all require time in a democratic system. Do we have the time for such a significant process? Or will we be overtaken by non-democratic countries and by other types of ethics because they can react more quickly? Speed, favoured by technology in digitalisation, is a significant factor. Will this characteristic be fatal to Europe, democracy, ethics and Christianity? Or will Europe find a way to lead such a discourse more quickly and to implement the results of a majority consensus in order not to lag behind in the global market?
It is worth asserting again and again that economic benefit and ethical discourse do not cancel each other out or deter one another; on the contrary, ethically guided developments can even be economically successful. Examples of this realisation can be found in ecological debates or in the concept of a social market economy: Owing to ecologically responsible behaviour, new products are created and, together with them, the concomitant markets. A social market economy always attempts to solve the ethical dilemma of wealth and its equitable distribution. Sustainability and social standards in production and trade (“supply chain law”) have become well-established and recognized values. Such developments give me hope that, before some irreversible disaster comes to pass in one area or another, our inherent common sense and ethical reflection will provide the necessary input. A discussion such as yours, Dr Caldarola, contributes to this process gaining the upper hand in time.
Thank you for your kind words. I look forward to your comments concerning my next Duets.
1 For the sake of brevity, the masculine form is used throughout.
2 Have the courage to use your own mind.