In her Duet interview with Daniel Goeudevert, the French man of letters, automobile industry manager and business consultant, Dr Caldarola, author of the recent book Big Data and Law, discusses the meaning of responsibility in the digital age.
While the industrial revolution initiated the process of automating production lines, digitalisation and the concomitant appearance of Big Data, Industry 4.0, AI, and so forth will probably complete it. In your book – Das Seerosenprinzip1 – you have described in great detail how individual responsibility is disappearing with the advent of ever-increasing mechanisation. Should we worry that, with greater automation via the digital revolution, responsibility will vanish completely or perhaps even be handed over to robots?
For the sake of our readers, let us go over some of the key takeaways from your book, Das Seerosenprinzip. With the advent of automated assembly lines, manufacturing a product was divided into numerous processing stages to increase efficiency (Taylorism). The worker was therefore responsible for his/her particular step in production, but not for the entire product. It follows that s/he could and can no longer have an overview of the entire production process, from purchasing the raw materials through processing the individual elements to selling the final product, because of the numerous stages involved. This naturally means that s/he only takes responsibility for what was done in his/her own step, but not for manufacturing the whole product. Such a worker is quite different from an artisan in a small workshop, where one person makes the product and considers all its aspects, from purchasing the materials needed to the design, execution and sale of the finished item. You conclude that with the advent of mechanisation, individual responsibility is disappearing. Furthermore, you ask who bears the overall responsibility in production for issues such as environmental protection, “fair trade” in the supply chains, and treating employees, suppliers, customers and so on with the respect and fairness they deserve. How do you view responsibility and what does it mean for you? Who do you think is responsible for these “higher” issues, and how can we ensure that someone takes responsibility for matters which serve us all?
Daniel Goeudevert: When I think of the word responsibility, or the expression to bear responsibility, then what comes to mind is being responsible for a procedure, a process, an event or even for a person.
Responsibility always has a beginning and an end.
The result of the procedure, for which a person has taken responsibility, can be a positive one, and the person responsible is happy to take the credit for it. The procedure can, however, go badly and that same person has to accept responsibility for that result as well.
Responsibility is always linked to many expectations concerning, for example, the position you hold, power, knowledge, income and influence, among other things. These expectations are willingly accepted by the person responsible. There is, however, a downside to responsibility: the consequences, should the process fail, are just as gladly disavowed and handed over to someone else. It’s a complex issue.
Let us first consider responsibility with regard to technology because that is what Big Data is all about.
When considering your question concerning technology, responsibility and freedom (to decide), Martin Heidegger comes to mind. No other philosopher has considered the relationship between technology and humans so masterfully as Heidegger did.
Roughly eighty years ago he had already predicted what is transpiring today. I can remember one of his statements in which he described the danger of developing technology as one of utility becoming “over-utility” and of technology having its own lack of purpose as its very purpose.
It is this statement which is crucial to our discussion. This lack of purpose of technology, or stated differently, the momentum which technology generates on its own can only be controlled to a certain extent. This realisation is essential to our later understanding of responsibility and freedom.
The technology of today has been refined and developed to such an extent that it can determine its own future. We can see this being played out in the advanced technology of Artificial Intelligence (AI), androids and robots, all of which can be designed to evolve further in their shape and “thinking” processes. Thinking is of course to be put in quotation marks here, since I do not believe that people can make another “being” think by means of technology or teach this being or robot what responsibility, freedom or other “soft” elements mean.
The issue of lack of purpose is important. Already in the 80s and 90s I had the feeling that the automobile was no longer fulfilling its original purpose, namely enabling the mobility of people. Instead, it was being used to nourish people’s vanity and desire for speed. In Heidegger’s terms, technology (the car) had progressed further on its own and was no longer true to its intention and nature, meaning the need people have to be mobile.
I fear people are susceptible to wanting to own a car or other product to portray themselves in a certain way and, therefore, the original intent of the product, enabling mobility, is little by little being lost and serving another area: self-promotion and ownership.
It is undoubtedly good to own a car. It of course has advantages and disadvantages. But nowadays ownership only has one advantage: projecting what I am or what I think I would like to be. Naturally this is true of other products as well, such as for mobile phones which have turned into status symbols.
By analogy, we can use this way of thinking with reference to the person responsible, such as the bosses of a company. They are also less concerned about their responsibility towards society, the environment, Fair Trade and employees but rather about advancing their own career, raising their salaries and increasing their influence and power.
Having determined what the aims are, we can then ask what freedom and responsibility are. If I remain true to the original purpose – in this case, the goal of becoming mobile or the aim of ensuring the well-being of my co-workers – then I can profit from and enjoy a feeling of freedom.
If, however, I consider areas involving ownership or self-interest, then I find myself in a completely different realm, one of dependency, to wit: owning money and property. It is true that I am responsible for the product that I own, in this case, an automobile. But here we must distinguish between the ethics of conviction and the ethics of responsibility in accordance with the philosophy of Max Weber. At this point, the focus must be on the reasons why I own a certain thing and the responsibility associated with these reasons.
Need and expectation are then defined in accordance with the purpose in question.
When a company wishes to introduce an environmentally friendly product on the market, because oil is becoming scarce, and the company wishes to safeguard non-renewable energy sources, then we can say that a specific need is being addressed. The company must then walk a fine line between offering said product at a price that is neither too high nor too low. After all, a rise in prices is to be avoided because of dwindling oil reserves.
Addressing expectations is an entirely different matter. Here the company does not want to satisfy a particular need. In this case, it is all about the size and performance of a product: 6 cylinders are more than 4, and 8 are more than 6. If a company builds an 8‑cylinder vehicle instead of a 6‑cylinder one, thus enabling the car to go 250 instead of 200 kph, then it has an edge over the competition. Accordingly, customers will prefer the faster car. From this perspective, the basest expectations of customers are met and not the noblest: the customer wants to show off, wants to drive fast and be a macho type etc. And addressing these expectations works because they are so simple.
One only has to figure out what the second-best on the market is up to. If it builds a car that weighs so much, drives so fast and costs so much, then the company in question need only add a plus to all these criteria. With that quick fix, it now has a product that offers more than the second-best on the market. It all works following the principle attributed to Max Planck: “Everything that can be measured is real”. An expectation can be provoked by fomenting it – this car can attain speed x – and is then fulfilled by the sale of the product.
The person responsible will behave correspondingly: if s/he is only concerned about his/her own career and income, then here, too, s/he need only find out who is second on the market.
There is a fine line between what is being advertised and what is actually being offered. I get the potential customer excited in order to satisfy his/her expectations. That is not allowed, and simply not done, where genuine needs are concerned. When I play around with expectations, what happens is exactly what the communications expert Edward Bernays practised: the purpose changes, and technology develops further on its own without having a purpose.
The most important factor is not the product but rather its function or the need which has to be fulfilled. Big Data, digitalisation, as well as the many other technologies are merely a means to an end. Everyone is talking about the aim of digitalisation being how fast processes are occurring and how quickly they are spreading. For the most part, however, no one mentions what the specific need or purpose is.
Indeed, going back to Heidegger, it seems clear that the self-progression of technology, the speed with which we are being overwhelmed and its lack of purpose can no longer be controlled.
Either digitalisation will soon be upon us or we are right in the middle of it, meaning we are one step closer to total automation. People rely more and more on the requirements set by programs and machines and are losing their abilities to do various things. For example, people have forgotten how to do their sums because they have a calculator. Similarly, we will soon lose the ability to read a map because we rely increasingly on navigation systems, and nobody will bother learning this type of skill either. Machines are supposed to be learning how to manage more and more tasks, either in the form of AI, algorithms, robotics or similar, and will thus take them over from people and continue to learn independently. Do these machines, programs, robots etc. not make any mistakes? And if they do, who is responsible for these errors, because responsibility has always been attached to the subject and not to the object. In other words, with the progression of digitalisation, will responsibility vanish completely or become redundant because no or hardly any actual people will still be involved in production processes?
Will responsibility disappear with increasing digitalisation? The answer is NO – even if, at the end of a fully automated process, only one person is left standing. That person, even if s/he is called God, will be responsible.
Of course, the person responsible will look for his own form of Satan, as in all monotheistic religions, such as Christianity, Judaism and Islam, to be able to shift the blame for his/her failure to this “Satan”. Those who wrote the Bible or other religious texts also considered this issue, for if only one God, or possibly in the future a human, exists, then this one being cannot be responsible for Good and Evil. Hence, this divine being requires an opponent, if you will, Satan, who has always existed and has always been responsible for Evil.
A dual system, a Janus head, has always been needed: one side representing Good and the other Evil, thus providing us with a being who is responsible for both Good and Evil. So long as many divinities existed, each of whom was assigned a variety of tasks, Good and Evil could be distributed among them.
However, a prerequisite for responsibility is the belief in values: Good, Evil, political or human values among others. People have always been faced with the dilemma of having to choose between Good or Evil. Of course, whether machines, technology, Big Data, algorithms etc. represent Satan is an entirely different matter.
Every process or operation has a beginning and an end. The process is started by an initiator, who gets the ball rolling, and, at the end, we have someone responsible for the process, who places the product created by this process on the market.
Today, as in the past, it is difficult for the person at the head of a company to have a detailed overview of the processes involved in production. S/he can, and always could, find out what transpired during the course of a certain process and thus determine who carried it out or who did something wrong. The person performing certain tasks during the process in question is the one who caused the event, while the person in charge was, and still is, the person responsible.
In an era of Big Data and digitalisation, that person can no longer be said to be responsible with as much certainty. There might be a process which functions with machines, algorithms, AI and neural networks, and, of course, it is obvious that the person in charge can no longer have a real overview of such a process. Indeed, s/he is more likely to be waiting for a final result. The boss of the company can bear responsibility if s/he says, I want to achieve a certain result, even if s/he cannot hold anyone responsible for an intermediate process.
If you view the topic of “Big Data” from this angle, it is paradoxical but true: it becomes easier to determine who is responsible. It will of course be possible to analyse the intermediate steps of the process in question owing to automated documentation. Paradoxically, however, it will no longer be possible to identify a subject, since the process will have been driven by machines, which is to say objects. It is like a black hole in which no one can really find an initiator.
To summarise: the head of a company needs an overview of the work processes far less than before – indeed no longer needs one at all – in order to bear responsibility.
For this reason, determining responsibility will work in this way: who began the process, and who put the product on the market? And if I follow this scenario to its logical conclusion, then managers of companies have more responsibility than ever – especially if it is no longer possible to find a subject for the intermediate steps. The person responsible can still say at the end of the process: I am not satisfied with the result and will not introduce it on the market, even if doing so means possibly losing his/her job. In this scenario, we can imagine the danger posed by a head of a company who views his/her duty solely as satisfying shareholders. Greed, extremism, bias, intemperance, competition, maximisation etc. will then all become increasingly significant. In that case, we would need morality as a corrective to prevent economics and science from derailing.
Are automation and now digitalisation leading to a loss of capability, thinking and feeling, which is to say to a loss of innovation and values, seeing as machines have no morality?
Yes. Here the key phrase of our conversation, “Big Data”, comes into play: a modern form of technology that multiplies information.
Does this make the situation better or worse, or does it merely change it? You are right to ask this question, since Big Data will have an influence and will bring about change – probably for the worse.
I am reminded of T.S. Eliot’s thoughts concerning information:
“Where is the Life we have lost in living? Where is the Wisdom we have lost in knowledge? Where is the Knowledge we have lost in information?”
That says everything and I would also add the following:
“Where is the Information we have lost in social networks and where is the sense of responsibility we have lost in Big Data?”
Big Data will probably make everything worse because the processing of data and information is not transparent and cannot be understood by people, even if companies are legally obliged to provide us with incredibly long data protection notices. The processing, storage and analysis are hidden in gigantic devices, algorithms, plans, computers, apps etc., so that people have no overview of, or insight into, these processes – even though, in this era of Industry 4.0 and online shopping, every single step taken by a co-worker or customer is automatically registered and documented by machines and their sensors.
People see only input and output, which follow each other so quickly that no one is capable of understanding the intermediate processing or of gaining an overview of the entire process. I can only hope that this does not break free at some point, for, if it does, we will surely lose control completely.
Machines, algorithms, Big Data and so on are all not moral beings. They do not invent themselves of their own volition, meaning they do not have the capacity to invent. At the moment, they can only improve themselves independently. It is important in this context to stress that they do not determine what the goal is and do not initiate any goals. Rather, they only optimise the processes in question.
There will, therefore, always be a programmer, or a person or persons pursuing a goal, who create and plan an entire development by using Big Data and other technologies. There will always be people who initiate the achievement of a certain aim, even if, owing to the high level of automation, only one person is left.
It is like that wonderful book by Aldous Huxley, Brave New World, in which people were happy because they were constantly taking the drug Soma.
The Soma pills are a metaphor for the advertising and social media of today: a world in which people are made submissive and needy, in accordance with the theory of Edward Bernays, by preying upon their expectations. In this way, it is conveyed to people which products they think they need. People – in this case the users of Facebook, Apple, Amazon etc. – believe themselves to be free and free to decide but, in reality, they are dependent on the drug, which in this scenario is social media. The Alpha Plus people from Huxley’s work are the programmers or the bosses of the various social media platforms.
The number of concomitant consequences is enormous. If a hacker – or even a programmer, in Huxley’s terms an Alpha Plus person – suddenly detects a small gap in the system, then the effect takes on a totally different dimension than before. This person can operate without being detected and is capable of shutting down an entire hospital in Düsseldorf or Munich, as recently seen in the news. And the situation could get much worse if this were to become the warfare of the future.
We consider ourselves to be well informed owing to analyses derived from Big Data or the social media, but too much information kills information. We think we are not only well informed, but also keep ourselves up to date quickly and all the time because we use the internet – without realising, however, that we have stopped using our own judgement. People confuse the amount of information with the ability to pass judgement on things, people or even a given state of affairs.
I think your questions are very timely. Particularly now, during the COVID-19 pandemic, we see how this situation has come to pass: everyone is providing information, commentaries and opinions on the matter. I stopped following what the scientists have to say long ago, be they virologists, epidemiologists, physicians, emergency doctors etc. Each one has something different to say, each contradicts the other, and they all think they know something about the subject. Furthermore, the politicians think they know as much as the scientists do.
For this reason, the film “Forschung, Fake und faule Tricks” (Research, Fakes and Dirty Tricks), also known as “La fabrique de l’ignorance” (The Factory of Ignorance), was shown on Arte at just the right time. The film discusses Agnotology, the study of ignorance and of how not-knowing is produced, which is now even taught at university as a new discipline. There is a whole industry out there dedicated to spreading doubt about research results because they contradict commercial interests. There are even renowned Nobel prize winners who support certain directions or products because they are lured in with research funds. And there are lobbyists for established products who protect their sales markets and hinder new disruptive technologies and products from attaining success.
We can observe how this tremendous amount of information is transmitted to the average person, and we are all average people in this regard. Information in these dimensions overwhelms even an average scientist. Indeed, an average scientist no longer knows what s/he is saying or what s/he ought to say, because so much research is being carried out, publicised and disseminated that opposes or contradicts other research.
People in positions of power used to determine goals, whether in science, the economy or elsewhere. Nowadays the economy and research develop on their own without any catalyst.
Science can be divided into pseudo-science, junk science and real science. Yet everything is becoming less comprehensible owing to the sheer reproduction of information in our society, and it would be a coincidence if someone suddenly had a genuine moment of illumination.
If we all thought we knew everything simply because we had access to the internet, then this would be both a wonderful and a disastrous situation. We can see it in social media, where each person thinks s/he can make the big bucks. In this area, it must be said that Mammon – money, profit and greed – has won.
It is amazing how influencers are able to collect millions of followers and make lots of money in a short amount of time simply because they suddenly post something which is basically uninteresting: I bought this item and I liked that one. Suddenly they have a lot of followers and are billionaires. One of the most bizarre developments of this nature in the past few years is the Kardashian family, who haven’t really accomplished anything but have somehow earned billions.
A person can say something on the internet which can reach thousands of people and yet simultaneously hundreds of thousands are able to contradict him via Twitter or Facebook. How are we supposed to form our own opinion these days? Where is the wisdom we have lost in knowledge? Where is the knowledge we have lost in information? Truer words have never been spoken.
Algorithms, AI and similar are all capable of finding correlations or the best, fastest, most durable or cheapest etc. In the end, are we only talking about quantitative parameters? What about qualitative aspects, ones that cannot be measured but which render life and labour human? In other words, are greed, bias, immoderation, ignorance, rivalry, maximisation among others being promoted because of digital instruments?
Correct. Evaluating data produces quantitative results, while a person has the ability to weigh both quantitative and qualitative aspects.
People believe in values such as religion, the Ten Commandments, the basic human rights of 1948 and the ethics of Kant and Kierkegaard but, regardless of the values, it would be interesting to know whether a machine could detect moral guidance or values if it read Kant or Kierkegaard.
You are correct when you state that quantitative parameters and thus baser feelings are being promoted. One of the best thoughts that I have ever encountered comes from John Steinbeck:
“…The things we admire in men, kindness and generosity, openness, honesty, understanding and feeling, are the concomitants of failure in our system. And those traits we detest, sharpness, greed, acquisitiveness, meanness, egotism and self-interest, are the traits of success. And while men admire the quality of the first, they love the produce of the second.”
That is such a true observation of human nature and, at the same time, answers one of your questions: Where is goodness in all of this, where is responsibility and why do tender emotions get the short end of the stick when it comes to how we act?
Yes, I fear everything will become worse, and one thing more must be said today: the so-called Mammon – money, profit and greed – has definitely gotten the upper hand over all the gentler emotions or “soft factors”. If we look for the latter in the person responsible, be it the person responsible for developing a particular type of technology or technique, or the proprietor or owner of a product, then these soft emotions, aspects or qualities can hardly be found, if at all.
The reason is that the people who are responsible for developing a product or managing a company only have one goal in mind: increasing the capital of a company or their own wages and satisfying their own shareholders, all of which cannot happen without profit.
This is also reflected in statements made by banks about thirty years ago, namely that they were only interested in shareholder value and a 24% net gain, but not in the rest. This “rest” is just waste. Faced with statements of that kind, you really cannot speak about responsibility and ethics.
Steinbeck’s observation shows us that people admire goodness but, at the same time, admire the produce of bad and evil behaviour and wish to attain it. Since machines, Big Data, algorithms etc. are permanently optimising gains, always working faster, increasing and improving productivity and, at the end of the process in question, increasing profits, then these means promote the “Mammon” of people: Greed, extremism, prejudice, intemperance, ignorance, rivalry, maximisation etc.
An earlier interview with Thomas Leyener made clear that a person’s personality includes the freedom to make an ethical or moral decision. The occidental view of being a person encompasses the freedom of that person, and this very freedom also means being free to do Good or Evil. Thus, if a person must be free in order to do good, then s/he will surely realise that this liberty also means being free to do the opposite of good, that is, evil. This implies that the freedom to do good cannot be separated from the responsibility we have to avoid evil. The correlation makes even clearer that freedom and responsibility correspond with one another and cannot be conceived in isolation from one another. Whoever wishes to act as a free individual is thus required by necessity to take responsibility for his/her actions. If we follow this argument to its logical conclusion, does this not mean that people will become less free with increasing levels of automation and digitalisation, especially if they are set as equals to these non-human entities?
Yes, the radius of freedom is becoming more and more constricted. Since machines favour quantitative aspects and, in this way, “Mammon”, the path of freedom to decide between good and evil will become that much narrower. With the advent of automation, information overload, social media, lack of transparency and Agnotology, it will become ever more difficult for a person to figure out what is required from him/her or what s/he can decide on his/her own.
Do you recall the new products from the financial sector, the derivatives? Many have tried to understand these products, and even most banking consultants have not been able to explain what they really are. They simply had no clue. Nevertheless, these derivatives are still being offered. In this way, we can see that the more complicated a procedure is, the needier a person becomes, and thence his/her freedom to make decisions becomes more restricted. Furthermore, with the means stemming from today’s media (the Soma pills), people will be persuaded to participate.
The ethical challenge concerning automated management processes involves finding the right proportion between the benefit from heightened security or avoidance of mistakes owing to automation and yet avoiding giving the impression that one could surrender one’s responsibility to a non-human entity. What would the “golden mean” look like?
It goes without saying that a non-human, mechanical or algorithmic entity does not have any values, at least not by today’s technical standards. People have values, and if tomorrow a non-living entity had values, then it would be comparable to God, who has given people values. Of course, people would rather hand their cares over to a non-human entity, a machine, but the machine is only a means, and not Satan. Compare this to the question: who is responsible for killing a person – the knife or the human agent? The knife, just as with a machine, cannot be held responsible; only a person can.
If Big Data, algorithms or a machine are being used to execute what is really just a business development plan, then these tools process the plan with pure logic. That is to say, it is still a person who is responsible for the business plan, as mentioned earlier. That person determines which problem is to be solved and whether the solution is to be introduced onto the market.
God is responsible for having created humans and, in the end, God is also responsible for what he has done. Goethe expressed it best in Faust, when Mephisto says: „Ich bin Teil jener Kraft, die stets das Böse will und stets das Gute schafft.“2
This is a brilliant thought because what the quotation says is that regardless of what we do, something can always happen which will lead us to something different and we cannot place responsibility in the interim on machines. Only a person, the subject, is responsible.
The question remains: can human freedom exist with the requisite responsibility, for with complex technological development, is such freedom even possible for any individual? Especially since no individual can possibly understand automated management processes because of systems which are continuously improving and learning as well as owing to other processes which are running in the background, and, given this situation, no individual could ever manage evaluating the process in question from an ethical perspective. In this way, an ethical dilemma has come to pass: Being responsible and yet, simply stated, not precisely knowing what it is I am doing. How would you approach this dilemma?
Let us consider the television, which sends gruesome moving images, for example the rape of a child, right into our living rooms. Technicians are able to understand how these images are transmitted into our homes, but most people are not.
In this situation, we can speak of several people bearing responsibility: the manufacturer and seller of the television, but also the person responsible for the content, as well as the viewer, who watches such gruesome content or has decided to have a television in his/her living room.
In between lies a huge and complex process. At the beginning and at the end there is the person responsible, who has the freedom to decide and to act differently. I will not, however, be able to remove myself completely from the process, because the content in question must be considered as well.
It will become even more difficult to avoid certain types of media, the Soma pills. If I do not own a television or smartphone, or cannot be reached via email or WhatsApp, then life becomes even more difficult. The pressure to take the Soma drugs is growing, and alternatives can scarcely be found.
Soma eco-systems are being created, which one can join in only three clicks, as with Amazon, but which are far more difficult to disengage from or exchange for another “eco-system”. And this separation process will certainly be made difficult by the people responsible. Withdrawal from the Soma drugs takes time and energy, and the people responsible for the “eco-systems” in question are competing for every customer.
It is thus apparent how limited the freedom to decide and bear responsibility is.
Processes are becoming more and more complicated and require ever more multi-disciplinary skills. If we think that freedom and responsibility are dependent on one another, can this condition even be fulfilled given the complexity of the processes in question? The only remedy is trust. But who is responsible for trust? Governments, health insurers, tax authorities, companies such as Google, Amazon and Facebook?
Yes, the world is becoming a more and more complicated place, less transparent and less comprehensible. “Having trust” as a solution? Trust is indeed very close to responsibility.
There is an element in trust, however, that I don’t like: the element of blindness. Trust can become an alibi for not being able to explain a process or operation.
Trust is a highly individual matter because it involves believing in something without knowing what it is.
I am reminded of a quotation by Hans Jonas: „Handele so, dass die Wirkungen Deiner Handlungen verträglich sind mit der Permanenz echten menschlichen Lebens auf Erden.“3 This sentence is a perfect union of these matters: trust, responsibility and their repercussions, behaviour, belief, values – it is all there.
I am also reminded of The Little Prince by Saint-Exupéry: when the prince puts his faith in the fox, the fox tells him: “People have forgotten this truth. But you mustn’t forget it. You become responsible, forever, for what you have tamed.”
So, the closer we work together, the more human and greater the responsibility is that we bear for one another. That is something a machine cannot do.
You also asked who bears responsibility for trust. With the advent of digitalisation, we can certainly observe a shift of power from governments to the private sector, something like a tectonic shifting of responsibility.
It used to be that governments and their administrations were responsible for the well-being of the people, with either a Maria Theresa or a Hitler at the head of state.
Today, Google, Amazon, Facebook, Alibaba and others are highly influential, but they don’t give a damn about what any government might want or what constitutional fiduciary duties the governments in question might have. They satisfy expectations and are themselves driven by Mammon.
Even though so many awful, untrue things, indeed, downright lies are being conveyed on social media, those responsible for the platforms in question are quick to affirm that everyone has the right to do what s/he wants.
GAFAM freed themselves long ago from the constraints of big governments. They consider themselves independent and are part of a world in which Mammon is the goal. The means they employ is the stoking and steering of expectations – propaganda as per Edward Bernays, or Aldous Huxley’s Soma drug, which turns people into needy addicts. In this way, people become less free, and the radius of freedom grows smaller than ever.
And people, as well as governments, are not even aware that their sphere of freedom is shrinking in every dimension. Honoré de Balzac might have called it “La Peau de chagrin” (the skin of disappointment). Governments can only govern within the space that business has left them. They no longer make decisions, because politicians look to their re-election, their own advancement, their power and influence, and pay little regard to the needs of citizens.
Indeed, the Brave New World which Huxley described back in the 1930s is upon us, and we have been part of this development for far longer than we realise. In an interview in the 1960s, Huxley was asked if he believed his vision would come to pass in that way. He replied that he had prophesied the development for 2034 and had only got the date wrong, because the development had already begun in the 1960s and was proceeding in accordance with his book.
This brave new world has come to pass faster than we could have imagined. It is surely no coincidence that Huxley mentioned Henry Ford in connection with the introduction of Taylorism (the sequencing of work).
Each sequence is optimised to perfection and beyond, until production and productivity proceed optimally. Responsibility for the entire process, however, cannot be Taylorised; it is simply not possible. Of course, one could attempt it, but the person responsible would lose his/her credibility if s/he told his/her employees that s/he was not responsible for anything because everything was running so perfectly thanks to Taylorisation and automation.
Mr Goeudevert, thank you for sharing your insights on the meaning of responsibility in the digital age.
Thank you, Dr Caldarola, and I look forward to reading your upcoming interviews with recognized experts, delving even deeper into this fascinating topic.
1 “The Water Lily Principle”
2 “I am part of that power which always wants evil and always creates what is good.”
3 “Act so that the effects of your actions are compatible with the permanence of true human life on earth.”