Standards are a way to reduce risks, improve efficiency and optimise key operations. Companies trying to create stand-alone solutions and monopolies tend to rely increasingly on standards, unity, streamlining and compliance. Does Big Data foster assimilation, standardisation, harmonisation and egalitarianism, and does it thereby increase efficiency?
Dr Caldarola chats with journalist Dr Hajo Schumacher, author of the book “Kein Netz – wie wir unser wirkliches Leben zurückgewinnen”, about the risks and opportunities of excessive standardisation in the digital age.
As a result of the Industrial Revolution, many processes were simplified and partially automated, while traditional crafts and individual, tailor-made work became much less significant. Products could thus be sold more cheaply and yet be of higher quality. We can now observe a similar development with digitalisation. A variety of services are being standardised through interaction with an app, a questionnaire or a pre-configured form, showing that automation is progressing here too. Machines are doing the work of humans in these areas as well, while individual claims and personal consulting are becoming less important. Examples of this trend are LegalTech, FinTech, online orders, online money transfers and so on. What does standardisation mean for us as individuals? Security, certainty in planning, predictability…? Simply stated, is there still a place for individuality, creativity, human interaction and custom-made products, and, finally, what about people themselves and employees?
Dr Hajo Schumacher: Here we can see various types of logic coming together: On the one hand, we have the desire of the customer to be treated as an individual together with a desire for custom-made products. That wish still exists – but it costs a lot of money, whether we are talking about furniture, travel, clothes or cars.
A completely different logic governs low prices. In this regard, tech providers and customers as a group see fairly eye to eye: It has to be cheap and convenient. Standardisation makes for bargains, even when products have to be delivered, as we have seen with Amazon and Lieferando, to name two examples.
Let us consider the financial services provider who has been after me for months because s/he really wants to invest my money for me. The theory is a simple one: One piece of software invests the money in funds which have been doing well, while another works out what sort of investor I am. A personal meeting is only considered necessary if I, as the customer, am still hesitant. That is when the human touch comes into play: the ability of the financial consultant to charm, manipulate and possibly persuade me. These qualities are costly, which means an algorithm tells the company how many times it is worth trying before resorting to actual human intervention.
In this way, even the human factor has been quantified electronically. Both sides can thus enjoy the advantages of digitalisation: The financial institution in question has a higher margin because personnel costs have been optimised, while I as the customer pay lower fees thanks to robotic investment and possibly run a lower risk, since no bank employee is eyeing a commission.
But, unfortunately, there are some very different stories. We know, for example, that in the US waiting times on customer hotlines are being optimised by AI. This means that when I call a telecommunications company to complain about an incorrect bill, I will get the message that approximately 34 customers are ahead of me and that I will have to wait at least half an hour.
Is it really like that? I have no idea, but the real point is figuring out at precisely what moment I will get tired of waiting and give up. At the same time, I will be told to have a look at the FAQ to see whether the solution to my problem can be found there. Of course, it can’t. Companies are really interested in hiring as few customer service staff as possible because they cost a lot of money.
AI is in a position to calculate when I will give up; it knows where my personal breaking point lies: When do I get so fed up that I swear on the life of my grandmother that I will never ever turn to that company again? Every person has their own individual breaking point, but an algorithm can calculate when it becomes relevant for the firm in question. It is never about offering the best customer service, but about providing the worst service customers will still accept. Really, the main issue is maximising profit. If you want to understand digitalisation, you actually only need to understand capitalism.
In many job application processes, application portals and pre-configured forms are used to submit one’s application. Now, however, personality profiles of applicants are to be created with the help of AI on the basis of short videos. If an applicant changes their outfit from a blouse and a blazer to a t‑shirt, changes their hairstyle or takes their glasses off, to name but a few examples, then the evaluation of the candidate’s personality changes as well. Will individualists or especially intelligent people be overlooked? This would mean the very people needed for completely automated processes, digitalisation and so on might not be hired. Are these sorts of processes even compatible with our notion of human dignity?
Here is a counter-question: Are job interviews ever compatible with human dignity? Yuval Noah Harari says that our digitalised life is a life-long job interview. The fact that application parameters change with the clothes you wear merely demonstrates how badly software of this type is programmed and how pathetic the criteria of the potential employer are. All the biases you can think of concerning men wearing suits and women wearing pearls seem to have been incorporated into these standards. Have fun with that! Well, we cannot blame the technology for that, but rather the programmers, who all seem to suffer from the same bias problems, regardless of where they come from. This means that the software merely reflects everything that has been going wrong at the HR management level. If we believe in a higher justice, then all the companies using this type of software will get exactly the sort of employees they deserve, which will hopefully become noticeable in the success of their business.
Why are employers relying on evaluations generated by machines, or rather by software, and not on their own judgement? Is it a question of responsibility and, if so, are we sacrificing our freedom by giving up or avoiding responsibility?
First of all, I would contradict the thesis that people have a big problem giving up their freedom. As long as we are talking about showy liberties, like having two SUVs, going on three cruises and looking down on minorities without restraint, people will fight for their supposed rights. But at the latest when it comes to freedom of speech, we see the limits of that freedom: just think of cancel culture. As soon as responsibility comes into play, or we actually need to fight for our convictions, most people fall by the wayside. Which brings us back to the original subject. The supposedly objective nature of software is really practical, because the personnel manager can simply relinquish all sense of responsibility and hand it over to the technology. Ideally, we even mystify the programme a little to make it appear ultra-objective and superior to people in every respect. Once this is done, I no longer have to answer for any of my decisions.
Here, too, we see the familiar polar opposites at work: cheap versus custom-made. As soon as we are dealing with costly expert staff, companies are willing to hire expensive personnel consultants. If, however, we are talking about easily interchangeable mass jobs, standardised software takes over the whole selection process, including all the red tape, the meal tickets and the invitation to the Christmas party. And why? Because it’s cheap.
Do we need an average person for digitalisation, or for digital systems, to work? Or will the greater reliance on digital systems itself bring about a selection process leading to a certain standard?
It probably works the other way round: Digitalisation is creating the average person. AI might calculate that an airplane passenger is allowed to carry 2.43 kg of hand luggage, so that as much luggage as possible incurs an additional charge. People might then start weighing their bags with a jeweller’s scale.
Sometimes you can also observe reciprocal effects. Look at Instagram, for example: People post motifs in order to receive as many likes as possible. The software then tells me that food, sunsets and cars go down especially well. And what do you find on Instagram? Plates full of food, beaches at sunset and people in front of cars. Is there anything less original than that? Or take activity apps. Ever since some random person decided that 10 000 steps was the goal we should all be aspiring to, people all over the world have been looking at their displays and feeling bad if they only managed 9876 steps in one day.
Technology creates norms based on human behaviour, and these norms in turn determine human behaviour. It is an interaction with only one goal in the end: controlling behaviour so that it can be monetised. You haven’t managed your 10 000 steps? Well, then try these amazing shoes which were recently worn for some world record. Or this particular protein powder. Or a more generous app.
The goal of digitalisation is to turn people into consuming machines who can order every bit of rubbish they want 24/7 without leaving the comfort of their homes, with the appropriate loan to match. It can hardly be a coincidence that all the big digital companies are forcing their way into banking. That is the nearest treasure left to be appropriated.
If so much is being simplified or standardised, what will happen to social behaviour in this context?
Social behaviour is not a primary goal of the algorithms in question – at least not in Western society. The big tech companies are configured to make as much money as possible with as much automation as possible: Google and Facebook have appropriated the advertising markets for themselves, Amazon has taken over trade and, of course, we have Netflix, Spotify, Airbnb and other companies in other domains. If we follow this commercial logic, social behaviour is thus reduced to sharing, which really means recommending, or to obtaining products for free. Look, Dr Caldarola has been watching this TV series, so wouldn’t this show work for you as well? And you’ll get a 20% discount.
In China, however, social behaviour is being steered in a more insidious way by means of the Social Credit System. If you brake at the crosswalk, you’ll get some points added to your score, but if you speak against the political party, that will definitely mean points taken away. If you happen to be a Uighur, then you’ll be in a point deficit for your whole life, simply by virtue of the fact that you are a Uighur. It’s particularly treacherous if you meet someone who doesn’t have many points because then you’ll have points taken away just for that. Merely talking to a Uighur could cost you your vacation. In this way, an entire society is being segregated, not in a geographic or social sense, but by being placed in digital ghettos. In fact, this system is highly effective, at least for those in power. For who decides what social behaviour is desirable if not the Communist Party?
The TV series “Black Mirror” illustrated these developments in a particularly clever fashion. But let’s go back to the topic of freedom: If a Social Credit System were to be offered in Europe which had been adapted to Western characteristics, and the system promised to lower the crime rate by 50%, how many people would choose freedom and how many would opt for the Chinese version of security? In the end, the surveillance capitalism of Silicon Valley and Chinese surveillance socialism start to resemble one another to a disturbing degree.
Robots, algorithms and AI are supposed to replace people and make production more effective and more efficient within the framework of Industry 4.0. They all use Big Data to find correlations, to develop algorithms, to train staff and to enable AI. Big Data can capture quantitative aspects, such as the fastest, the greatest, the most frequent, and so on. But what is happening to qualitative aspects such as Western values, human dignity, a person’s ambition, ethics, solidarity, privacy, defining your own goals, emotions, strategies? Or does the digital age no longer need such things because everyone is, or will be, the same?
The values of the Occident are not of much concern to software developers and their advocates. I recently read an interview with a game programmer, who wished to remain anonymous, and he was quite honest about what is important as far as digitalisation goes: revenues, of course.
Anyone who works in the games sector is well aware that computer games are addictive because players are constantly being deluged with dopamine. Dopamine is pretty much as addictive as nicotine, but you cannot ban it because your own body produces it.
It’s an amazing business model: Dealing is legal, and the junkies are more or less social individuals, if you ignore the temper tantrums exhibited by children and adults as soon as you try to stop them from playing – in other words, a classic case of withdrawal symptoms.
Every second a customer spends on his or her computer translates into a veritable mine of information: Which obstacles annoy players, where and with which characters do they like to spend time, how much money are they prepared to spend on special features, at which point do they stop playing? The goal of every digital product, whether Fortnite, TikTok, Netflix or Facebook, is to keep the customer hooked for as long as possible.
And you’re talking about dignity, morality, solidarity, privacy. Doesn’t that make you feel nostalgic for those old-fashioned values?
Innovation thrives on mistakes and on learning from them. Children fall down, get up again and keep on practising until they have figured it out. From the moment children start school, however, they are taught to make as few mistakes as possible. Failing is judged negatively, even though it is possible to learn something new, something better, from it. Innovation is the most significant, if not the only, tool for increasing our GDP. Why do we forgo this process? Or is standardisation the new innovation?
Standardisation combines market and planned economies. It does away with competition and only favours innovation which is of use to one’s own advancement.
Amazon has succeeded in doing what communism could never manage: namely, taking control of a market and all its processes while, at the same time, doing away with the mechanisms of a free market.
All the parameters in question have been factored in, from the needs of customers to the weather during transportation, from storage to the mood of the delivery person. As a small-scale supplier, I have to be part of Amazon and let myself be squeezed for a 30% commission. In return, Amazon knows all my trade secrets, costs and returns. If my product is in high demand, Amazon can underbid me at any time and curtail my online presence. The future is being determined by platforms, so-called proprietary markets, which have cancelled out competition in the classic sense of the word. There is a lot of innovation taking place, but only if it serves the Amazon cartel.
Robert Musil wrote in his unfinished novel, The Man without Qualities, that being completely in order is, as it were, the ruin of all progress and pleasure. Digitalisation is bringing us order, uniformity, standards, consistency… Will this be the ruin of humans, the loss of our human diversity, a process which, in a quite different form, is already under way in the current decline of species in the plant and animal world?
At this point, I would like to say something in favour of standardisation. When the industrial world power Great Britain was still having “Made in Germany” printed on German products in order to discredit goods coming from the potato state, it was precisely the standardised nature of German production that brought an end to British hegemony and favoured Germany’s rise as an industrial nation. For, all of a sudden, nuts, bolts and screws all fitted because they had been standardised. When it comes to mass production and precision, having a norm is a blessing. How standardising human behaviour works is something we are trying out as we speak.
In your opinion, will “digital progress” continue to gain ground, without any consideration for the concomitant losses, while adhering to the motto: Whatever can be done, will be done?
I think the real question here is: Do countries still have the power to set themselves against global players like Google?
Let’s assume Google wants to stop a politician who is lobbying hard for heavy taxation of digital companies. Can we exclude the possibility that this politician will find his name listed in connection with problematic search terms, that negative stories in particular appear at the top of the results, and that one day certain details from his life will come to light?
What would happen if someone compiled a list of all your search queries from the last twenty years, without any sort of comment, and then published it? What if they included your movement profile, credit card bills, telephone conversations and chats? And let’s not forget our press, which is always hungry for sensationalist news.
Who is so naive as to think that a commercial business would not manipulate its search algorithm if its image or stock market price were at risk?
Google is in a position to destroy every single person in the world, and every public figure knows that. Does a country have the power to fight that? Does the chancellor have the courage to go to war against digital war machines? Have fun with that thought. The battle will already have been lost if we do not soon apply the Rockefeller solution: break it up. And do it thoroughly. Economic monopolies are bad enough, but information monopolies are far more dangerous.
Do algorithms, AI and robots ask critical questions, and are they taking over our capacity for thought and innovation? Can we depend on them? Are we saying: “Don’t worry, this is just a normal evolutionary process at work”? After all, people on this planet have always been on the look-out for extra-terrestrial beings. Are we creating this fantasy world for ourselves through digitalisation, avatars, digital twins, robots … and is that going to represent our new innovation and creativity?
If we have learned one thing in the past few years, it is this: The harmless nutcases fittingly called evangelists, who, at the behest of data companies, dreamed up random fantasies about eternal life, space stations and all the other science fiction we have been happy to hear ever since Jules Verne, served one purpose. These stories were meant to provide ruthless and exceedingly unremarkable money machines with a meta-level, a higher meaning, something pseudo-religious.
The fact is that the so-called Alphabet company, which supposedly promotes medicine, gene sequencing, modern transport and a variety of other trendy topics, is actually just an empty shell fuelled by a single pipeline: the huge amount of cash earned by the search engine known as Google with its global monopoly.
It’s a similar situation with Facebook. These companies are not exactly gifts from the gods. Rather, they work at a very trivial level, just like Big Tobacco, Big Alcohol, Big Petrol, Big Sugar or Big Opioid: They make you addicted to their product, defend their monopoly by all available means, bribe research so that it turns out in their favour, make money for as long as politics allows it, leave the ensuing damage to society and, by all means, donate a few museums or libraries so as to make the company’s name sound nicer to the generations to come.
I do have one final thought concerning, as you put it, “our normal evolutionary process”: The internet is like climate change: We know what awaits us, and it won’t go away if we simply ignore it. We know all the mechanisms of addiction, the empty promises, the tricks of the influencers, the helplessness of politics. Yet when people prefer to stare at their displays instead of looking at other people’s faces, we have to face the fact that this development is quickly becoming reality. But is anyone really going to call this disturbing trend “progress” and, more to the point, what are we progressing towards?
My opinion is:
A good life is still possible in digital times
Dr Hajo Schumacher
Dr Schumacher, thank you so much for sharing your opinion, your thoughts and your views on standardisation versus individuality.
Thank you, Dr Caldarola, and I look forward to reading your upcoming interviews with recognized experts, delving even deeper into this fascinating topic.