Big Data and the autonomy of thinking – how Big Data is profiting from our distraction

Prof. Dr Thomas Metzinger

Is there such a thing as mental capitalism, in which the digital transformation of information and entertainment tends to make them worthless? Is mindful consciousness the new sought-after resource, and is there an area where this focused attention can become a currency that can be actively cultivated? If so, what challenges do we face, and how can digitisation increase our intellectual autonomy?

In the latest of her Duet interviews, Dr Caldarola, author of Big Data and Law, and Prof. Dr Thomas Metzinger, the well-known philosopher and academic, discuss whether digital sovereignty and mental autonomy are related.

You have mentioned several times that the most limited resource today is not water or food, but human mindful consciousness. Can you explain that idea to our readers?

Prof. Dr Thomas Metzinger: Of course, there are many people in poor countries for whom water and food are the most limited resources, and it is equally obvious that these people would much rather have water that doesn’t cause disease. Those of us who live in richer countries are often not very aware of this type of situation. We only become conscious of this fact when we travel to countries where we cannot safely drink tap water, for instance. Safe and clean water is not something that can be taken for granted by many people on this planet.

For those of us living in more prosperous countries, where we have enough water and food, I think the most limited resource is conscious mindfulness. We are overwhelmed by stimuli and information. As a result of digitisation, we now have an industry in rich countries that builds its business models on these stimuli and information and has started to systematically monetise this attention. We call this an extractive economy, because this very consciousness is being extracted from the human brain and then processed for the respective business models, e.g., for personalised advertising.

The problem is that within each individual human being and brain, the ability to be awake, focused and attentive is a limited resource. Babies hardly have this capacity at all. Older people or people with dementia also lose this ability with the passage of time.

By alertness, concentration and attention I mean the ability to deliberately focus one’s attention on an object and keep it there – for example, directing attention to the book in my hand or to what my interlocutor is saying, and keeping that attention focused in the face of all kinds of distracting stimuli. We need this ability or resource. It is only available to us humans for a certain amount of time during the day. Then we become tired and are easily distracted.

There are two types of consciousness: automatic consciousness and voluntary consciousness. And it is the latter that I am concerned with. We have neither in deep sleep nor in dreams (the exception being the very rare lucid dreams).

During the day our minds wander, and we spend 30–50% of our time away from what we are actually doing. So, if we actually intended to prepare a meal and to cook and enjoy it with guided awareness, our thoughts would wander again and again while we were doing it, e.g., thinking about the next task, mentally planning our vacation, etc. This means we are constantly being distracted and do not remain focused – in our example, on cooking and eating. The keywords here are “spontaneous task-unrelated thought” and “mind wandering”. It’s the seemingly aimless mind, the daydreaming, the unbidden memories and the automatic planning. It’s about what I call “mental sleepwalking”, that is, the permanent occurrence of seemingly spontaneous, task-independent thoughts, the loss of attention control, and distractions repeated hundreds of times a day. Much like dolphins breaching the surface of the water, thoughts often cross the line between conscious and unconscious information processing, in both directions.

This resource of consciousness is critical for our quality of life. It is significant not only during working hours, so that we can do our work, but also outside of them – to be able to listen to friends, family and acquaintances, to empathise, to really experience nature on a walk in the forest without constantly wandering off in our minds, or to really enjoy and perceive the food we are consuming. This applies to many other areas of life as well. Today there is stiff competition for our consciousness: employers, friends, family, acquaintances and so on all want or need our attention and concentration. The ability to pay attention and concentrate is a clear predictor of success at school, at university and in professional life, as numerous studies have shown.

My opinion is:

The digital transformation tends to make information and entertainment worthless.
We are facing a system of mental capitalism because mindful consciousness has become a limited resource.
Mindful consciousness has become a currency that can and will be actively collected.

Prof. Dr Thomas Metzinger

If there is an industry for extracting consciousness – a process which is now being carried out better and more effectively with digital tools – then we are talking about the exploitation of the scarce resource of mindful consciousness. We cannot see this battle for our mindfulness very clearly. We only notice the dwindling of our quality of life, without knowing what is actually causing it. The consequences are depression, burnout, emotional exhaustion and so on. This phenomenon could be compared to a legally regulated or even imposed blood donation: if a little bit of blood is drawn from us again and again, systematically, then that too has serious consequences for our health.

This continuous, systematic and industrial draining of our consciousness has never existed before in the history of our evolution. And that is a key problem today, because the price of information is falling, consciousness is being systematically monetised, and the psychological costs are externalised.

We collect information and knowledge, and it can be digitally retrieved and received worldwide at any time. We have digital tools and therefore no longer need skills such as mental arithmetic, because the digital tools do it for us at the push of a button. Do we still need memory, consciousness and autonomous thinking these days and, if so, why?

In theory, there is nothing wrong with machines taking over many functions. But just like our muscles, our brain works according to the principle of “use it or lose it”: abilities and resources are depleted when they are no longer used.

Have you ever asked students in your hometown for directions? They can usually no longer tell you how to get to the town hall, because the existence of navigation programmes or apps on smartphones means they no longer need the ability to navigate spatially. That’s all fine as long as we have good apps, the satellites aren’t broken… But we can no longer do such things on our own if these apps stop working.

On the one hand, technology relieves us of a lot of work and creates space for something new. On the other hand, it also creates dependency. What is bad about the latter is that when we become dependent, we lose self-control and mental autonomy. When we depend on digital tools and media, they also “tell” us where to focus our attention. And that becomes dangerous, because then we are no longer independent people who can decide and act freely.

Particularly dangerous is the loss of responsible citizens – critical social subjects who form their own opinions in a political community and can thus recognise grievances, address them and work out proposals for solutions.

Of course, as AI takes over human work, employers need fewer and fewer skilled workers. But even if AI increases productivity with less manpower, we still need people to buy and consume the products designed, engineered and manufactured by AI. Given the current economic model, therefore, we definitely need well-informed consumer groups.

And even if there were an unconditional basic income for employees rendered “superfluous” by the scarcity of work still needing to be done by people, companies would naturally also want to skim off that unconditional money through their business models and turn it into corporate profit.

And it is precisely at this point that we can see that companies would prefer customers they can control with their digital tools – the consumer becomes a product via the extractive attention economy. Critical rational thinking and moral subjectivity have not yet become the object of technological processes, but they would be extremely relevant for the resilience of our democracy.

It is, of course, not the goal of companies to work towards the common good. That is the task of the state and of politics. It only becomes problematic when companies undermine those very tasks of the state because they are more sophisticated and faster, and exploit the state’s lack of knowledge about the business model being used and the technology for extracting attention. Most of the time, the state notices the changes only gradually, for bureaucracy and the necessary democratic debates and votes are usually stumbling blocks until “corrective” and delayed legislation comes into force.

There is also something like legal positivism, where, for example, highly paid representatives of the defence industry – also called lobbyists – emphasise that they would abide by the laws if they were changed, and that they also want new laws to be enacted. They demand legal certainty from the state, because for them there is only the law. This is very clever but also disingenuous, because in reality they have a deeply unethical agenda, and in this way they can completely sidestep the ethical debate. Regulation and legal compliance are therefore basically part of the product or marketing strategy.

The respective business models are part of an evolutionary process. They take part in economic competition, are constantly being tested, and exploit legal loopholes or an ethical vacuum. Perhaps there will soon be a type of AI that can find these gaps.

Companies which actually have idealism as part of their corporate policy – such as the GLS Bank – try to organise “clean” capital flows and use their profits for the common good, i.e., to offer an alternative financial industry. However, compared to the volume of conventional banks, they are a very small industry indeed. There are many individuals and sole proprietorships trying to act in a way that is oriented towards the common good. But in the age of globalised predatory capitalism this is becoming more and more difficult, because there is also the matter of efficiency – one has to admit this quite freely – for example with a view to China.

I suppose the big tech giants (Facebook, Amazon…) have realised that they require the mindfulness of their users and are targeting human consciousness on their platforms by using algorithms that are improving every day. How do the tech giants attract consciousness to their companies, and how successful are they at keeping it?

It is the algorithms that continually improve and learn from experience. These algorithms take advantage of the weaknesses of the human mind – for example, our evolutionary characteristics, such as attracting attention through outrage or releasing energy by creating and insulting tribal identities (in-group/out-group behaviour). For instance, if a group is offered an identification which means they are true patriots, and another group is created to oppose that identification – even if it is only a simulated group – then a struggle ensues between the groups and energy is released. Tribalism is the feeling of belonging to a certain group, and this phenomenology of identification can be technically stimulated or attacked. The trick to attracting attention is to create a platform for people to fight, swear and insult one another. Another way is to offer the possibility of using pseudonyms instead of real names on these platforms. This lowers the inhibition threshold to take part, and even a “loser” can finally and unabashedly insult celebrities and enjoy an illusion of publicity and agency. Orgies of insults can take place from morning to night – meanwhile our focus on those things can be continuously extracted and monetised.

Of course, there are also disoriented people who no longer understand digitisation, globalisation and the complexity of the world, and who are looking for “affinity groups”. Most of the time, these groups are closed to those who think differently, to ensure stability and support against disorientation and fear. This is called siloing. Siloing can also be organised, and it relies on tribal affiliation. We do know this: faced with an external threat, cohesion and solidarity in the respective group increase. If these threats disappear, social behaviour and altruism in the group also disappear. This is really bad news, because it means that fragmentation into subgroups increases especially in times of peace.

Another phenomenon is “virtue signalling”. This means that people want to show off on social media – they want to show how virtuous they are, displaying moral values coupled with hoped-for approval. That includes how much they donate, the fact that they are vegan, and so on – which is basically all about moral self-marketing. People use all levels of social media to advertise themselves as a product: statements of solidarity and self-portrayal (e.g., nude photos and apparent professional success). An illusion of self-agency is created, because people get the feeling of being seen and having an impact – which is exactly the feeling that business models built on reach and being noticed by many people are designed to create. In reality, we are all overwhelmed by stimuli and information and cannot digest that much of it. Hardly anyone is looking. The device used here is to give people the feeling of communicating and networking, which in fact only rarely exists. This also explains why very active radicals seem particularly visible to us but in reality represent only a small minority: the many silent or inactive people are more or less invisible on social media.

Continuing with this aspect, let us consider the evolution of primates and their grooming. Animals get together and pick the lice out of their friends’ fur in places they cannot reach with their own arms or mouths. This is how these animals form relationships. Exactly this is now happening on social media. The likes are nothing more than caresses at a distance and give the illusion of real delousing, or of a declaration of friendship. It is also no coincidence that this culture of likes comes from America, because most Americans don’t really know what friendship is – at least by European standards. In the USA “everybody is your friend”, and of course this is also constantly being reinforced. The virtual simulation of friendship and genuine social relationships is a social institution there. The “friends” and “likes”, the “hearts” and the “followers” are a huge illusion, a form of social hallucination that many people fall for because their need for friendship and social connection is so great, and that need is thus being satisfied in this way. Social media are hallucination machines where pleasant stories are told, where people no longer know whether the stories are by real people or avatars, and whether these tales are real or fabricated. The offers we get are deceptively real and are getting better and better; the users are simultaneously product and content in one. This is also shown by the recent news about the Google employee Blake Lemoine. He was suspended because he believed the chatbot LaMDA had developed its own consciousness. Mr Lemoine demanded that LaMDA first “agree” to be subjected to further experiments. He had imagined a social relationship – and that will happen more often in the future.

We can hallucinate friendship or insult online; the computational goal of the algorithms is “maximum engagement”. The economy of mindfulness is unfolding on social media; a form of mental capitalism is taking place in which our concentration is systematically extracted and monetised with self-optimising algorithms. What we are dealing with is AI-based surveillance capitalism.

Why are the tech giants so successful? Is it because we no longer understand the complexity of digitisation and globalisation? Or because we are afraid of uncertainty? Are we consequently heading towards radicalism and creating breeding grounds for tech giants to achieve their goals? If so, what are their objectives exactly?

There are many different aspects to this issue. One is that our environment is becoming too complex for many people, who then long for simple models and explanations. This promotes conspiracy theories and, on a political level, populism.

Another aspect is that the new digital environments are stressful for us, without our understanding why we find them stressful. For example, most people have had the feeling of constant acceleration for several years now. Everyone longs for deceleration and even talks about it, but only very few manage it. And even fewer understand the reason for the stress and the acceleration.

Many notice that they are somehow exhausted without being able to identify the countless different factors causing this emotional exhaustion. People no longer want to hear about the pandemic, Brexit or the war in Ukraine. Many just want these unpleasant events to “disappear”. Recently I heard someone say: “I would like to lie down and go to sleep in the evening and wake up the next day to find everything solved.” Political commitment does not go any further than having this wish fulfilled. We are experiencing an emotional overload that makes us susceptible to simple, beautiful emotional experiences and addictive forms of distraction. The beautiful experiences are offered by the hallucination machines, the simple explanations by the populists, the religious and the radicals. Among other things, this also leads to the creation of clear images of an enemy.

We need a certain level of trust for social interaction. If deepfakes and fake news, internet fraud, phishing emails, bots and so on become so good that trust is systematically undermined and too many doubts are spread, then a basic mistrust arises, and with it an epistemic crisis. An epistemic crisis is caused by a large number of people who no longer believe many things on principle and believe that there are no longer any credible news sources or “real” experts at all. Just look at the training of our journalists, who nowadays are taught that they have to show at least one opposing opinion in every story – which erodes into a sort of “sham balance”. In the end, this crisis goes to the very foundations of our social cohesion and erodes our basic democratic accord.

You ask about goals? As already mentioned, it is about extracting the most limited resource, namely mindful attention. This has become a business model.

The tech giants have succeeded in their aims because the state has failed to regulate their actions or enact laws to deal with them. There was, and is, a legal vacuum around this new business model and its associated technology. The tech giants also have a powerful business lobby that represents their concerns and helps to render our living environment ever more technological.

What will result from the extraction of human mindfulness as a new business model? Will we all think in the same way? Or, expressed differently, are we experiencing a “dispossession of thinking”? Or, to put it yet another way, is our mindful consciousness being drawn to the goals of the tech giants in such a way that we lose the ability to think for ourselves? Are people becoming a product of the tech giants?

In fact, people always become products if they do not pay for the digital service or digital product they are using. In this scenario, data becomes a kind of currency or medium of exchange for the digital goods and services offered by the company. In addition, people are being systematically made more and more predictable in their role as consumers.

The big tech giants come mainly from the US and China. When they capture the attention of their users, they inevitably tap into the brain structures of our children growing up with social media. What do they want to achieve, and what can Europe do about it with its lack of digital sovereignty?

That’s right – digital sovereignty, especially over our critical infrastructures, actually lies in the US and China, because the big tech giants have their headquarters there. Since AI also comes from these countries, we will see a “de-democratisation of AI” on social media. There is an asymmetric structural dependency, which is already expressed, for example, in the dominance of the American secret services (one example being the NSA scandal).

In theory, what we can do about the exploitation of our mindful awareness by foreign business models is to create alternative infrastructures geared towards the common good.

As we learn more about how our consciousness works in neurobiological terms, our ability to influence it in a targeted manner also increases. It is very important for us to understand that this very complex asymmetrical structural dependency exists, because digital sovereignty and mental autonomy are related to one another.

We should be asking ourselves a deeper and more general question, namely: what type of consciousness do we want to promote socially, and which should we shun? The problem of the correct “culture of consciousness” already affects children. For example, children could learn meditation techniques in order to be able to control the level of their mindful consciousness themselves – so that they can better resist manipulation in media environments.

It is important to start a social debate on our – at least European – ethics of consciousness. We need to think about questions such as: which types of consciousness should be illegal in our society? Which ones do we want to promote, cultivate and integrate into our society? What states of consciousness may we impose on animals or machines? What types of consciousness do we want to show our children? In what state of consciousness do we want to die?

A completely new and very significant question in this context is: how can digitisation increase our intellectual autonomy?

A lack of, or reduced, human consciousness lowers our quality of life and increases addiction. Where are we headed?

Yes, the extraction of mindful consciousness lowers quality of life, increases addiction and has health consequences. Here are a few examples concerning mental health in the digital age:

One example is the increase in media addiction among young people during the pandemic: the number of pathological computer gamers rose by 52 percent. Children and young people (particularly girls between the ages of 10 and 14) are especially susceptible to ridicule, malice and bullying. The internet offers new forms and channels of bullying, because ridicule, ill will and harassment are omnipresent and reach a large audience through social media, so that children and young people simply feel powerless in the face of bullying in a digital environment. Suicidal thoughts are three times more common among victims of cyberbullying.

Another example is the use of digital pornography, which typically begins at age 14, with first exposure happening as early as the age of 8. The psychological development and sexual socialisation of young people have shifted to the online world. Some of the consequences may include early sexual initiation, multiple and/or casual partners, mimicking of risky sexual behaviours, online sexual victimisation, assimilation of distorted gender roles, body-image dysfunctions, aggressiveness, anxious or depressive symptoms and hypersexualised behaviour – to name but a few.

Screen-based media use has structural effects on brain development, including in the regions where linguistic abilities are located. Preschool children with more than an hour of daily screen time show significant developmental delays in the brain regions responsible for language acquisition and comprehension.

This series of initial empirical results could easily be extended: the use of electronic media in the evening and at night correlates with sleep disorders and depressive symptoms, and long-term studies show that social media use is a significant predictor of depression.

Studies also show that more than 50% of American high school students cannot distinguish commercials from real news. They go by the rule: “If it’s viral, it must be true.” Fake news spreads six times faster online than real news. Anger is the emotion that spreads fastest on social media. People do not share information primarily on the basis of truth or accuracy; today the core motivation is to increase status and popularity and to build and stabilise an apparent “circle of friends”.

It is interesting to note that the mere physical presence of a switched-off smartphone reduces cognitive capacity: working memory and problem-solving capacity are reduced when the phone is in the same room. The sheer physical presence of digital technology reduces emotional closeness and feelings of connectedness, and promotes shallow and meaningless conversation. Moreover, social media use favours neurotic behaviour while reducing honesty and humility. According to studies, Facebook addiction reduces brain volume – similar to some forms of substance addiction.

In an earlier interview, you said that human mindfulness is a crucial prerequisite for the common good, especially in times of social and political crisis – such as we are currently experiencing with the pandemic or the Ukraine war – and also in the context of accelerating global transformation processes driven by digitisation. Indeed, this conscious awareness increases democratic resilience and the internal cohesion of society as a whole. What could digital empathy, digital solidarity and cohesion look like in order to keep democracy alive in this digital age (home office, web shops…)? Where are we heading, given that studies have shown democracy to be in decline since 2009, and that digital tools are characterised by standardisation, automation, probability and patterns, but few emotions?

That’s exactly the question I have been asking myself! Perhaps I should start by saying something about autonomy.

The term “autonomy” goes back to the time of the Greek city-states, where individual cities had the right to impose laws on themselves. A similar figure of thought is found later in Kant, for whom intellectual autonomy does not consist in doing what we want, but in being able to impose reasonable rules on ourselves, according to which we think and act.

The nomoi are the laws, and mental autonomy is the capacity for mental self-control. The latter has two components. One is the one I mentioned earlier: being able to control the focus of one’s attention, i.e., directing attention to a focal point and keeping it there. There is research showing that just one glass of wine significantly increases daydreaming or mental drifting. It is the same with many other drugs. Of course, there are also drugs (such as modafinil) that help increase focus.

The second component concerns what we are thinking in the first place. This is about cognitive self-control: controlling one’s own thoughts. For example, if we want to calculate what 2 plus 2 is, then we must arrive at the result 4 and properly go through a series of mental models. Cats, for example, have a high level of visual and auditory awareness when listening and watching, but they are unlikely to be able to think as we do or to have more complex symbolic chains of thought.

Both abilities – controlling attention and controlling thoughts – are ethically and legally important and worthy of protection; in the best-case scenario they should be available to most citizens and be systematically promoted in the interest of democratic institutions and the common good. We need them not only for our quality of life, but also for the gross national product (i.e., an efficient workforce). If we burn ourselves out through excessive use of social media in our free time, then our performance at work is also reduced. Workers then start to cheat their employers out of working hours by quickly booking private flights at work, tweeting again and again, compulsively checking the news over and over, or engaging in other such activities. That is a loss of manpower in the working world. The ability for mental self-control is, however, also important for a successful life and for maintaining mental health. The state – a resilient democracy – needs people who can think clearly and stay consciously in the moment.

Because mindful consciousness is being extracted with the help of new digital business models, there is, so to speak, a competition between democracy and the tech companies. Democracy needs responsible citizens with all the intellectual qualities I have mentioned. The tech companies want to access these same qualities and monetise them. There is, therefore, a kind of tug of war going on over the intellectual resources of citizens. A well-fortified and resilient democracy must defend the intellectual autonomy of its citizens against companies that want to monetise it.

Of course, the actions of the tech companies could be legally restricted by simply banning certain practices or entire platforms. But new loopholes keep appearing, and the business lobby has long since infiltrated political institutions.

An economy based on conscious awareness monetises our mental autonomy with the help of AI. The business models behind this system cannot be controlled by ethical guidelines or by ineffective European regulations. Open-source digital tools such as Mastodon or Linux, which are organised in a delocalised fashion and are not tied to an extractive business model, would be better. But that is easier said than done since, as we know all too well, it is no simple matter to expropriate or effectively regulate the tech companies.

We are witnessing a power shift from the world's governments to the CEOs of the tech giants. The question is whether this accumulation of power is already so enormous that it can no longer be reversed. Perhaps we have already passed the point of no return, and by the end of the century nation states will no longer be the decisive actors shaping the global order.

Tapping into and taking possession of our very minds is certainly far more drastic than our dependence on Russian gas. The business lobby, with its short-term profit interests, and the government's unsuitable and ineffective design of the market have made us tremendously dependent on Russian gas, as we are now beginning to realise. Perhaps this comparison makes clear that other technologies, like the new extraction economy for mindful awareness, are placing us in a dangerous situation, and especially under American and Chinese control, because these countries currently have the technological upper hand as far as digital tools are concerned. Digital sovereignty and democratic resilience are also related to mental autonomy, and we therefore need more than mere technological sovereignty.

Prof. Dr Metzinger, thank you for sharing your insights on the relationship between digital sovereignty and mental autonomy.

Thank you, Dr Caldarola, and I look forward to reading your upcoming interviews with recognised experts, delving even deeper into this fascinating topic.

About me and my guest

Dr Maria Cristina Caldarola

Dr Maria Cristina Caldarola, LL.M., MBA is the host of “Duet Interviews”, co-founder and CEO of CU³IC UG, a consultancy specialising in systematic approaches to innovation, such as algorithmic IP data analysis and cross-industry search for innovation solutions.

Cristina is a well-regarded legal expert in licensing, patents, trademarks, domains, software, data protection, cloud, big data, digital eco-systems and industry 4.0.

A TRIUM MBA, Cristina is also a frequent keynote speaker, a lecturer at St. Gallen, and the co-author of the recently published Big Data and Law now available in English, German and Mandarin editions.

Prof. Dr. Thomas Metzinger

Thomas Metzinger was Full Professor of Theoretical Philosophy at the Johannes Gutenberg-Universität Mainz until 2019. He was also president of the German Cognitive Science Society (2005–2007) and of the Association for the Scientific Study of Consciousness (2009–2011). Since 2011, he has been an Adjunct Fellow at the Frankfurt Institute for Advanced Studies, a co-founder of the German Effective Altruism Foundation, president of the Barbara Wengeler Foundation, and a member of the advisory board of the Giordano Bruno Foundation. From 2008 to 2009 he was a Fellow at the Berlin Institute for Advanced Study; from 2014 to 2019 he was a Fellow at the Gutenberg Research College; from 2019 to 2021 he held a Senior-Forschungsprofessur (senior research professorship) awarded by the Ministry of Science, Education and Culture. From 2018 to 2020 Metzinger served as a member of the European Commission's High-Level Expert Group on Artificial Intelligence.

In the English language, Prof. Metzinger has edited two collections on consciousness (Conscious Experience, Imprint Academic, 1995; Neural Correlates of Consciousness, MIT Press, 2000) and published one major scientific monograph (Being No One – The Self-Model Theory of Subjectivity, MIT Press, 2003). In 2009, he also published a popular book aimed at a wider audience, which discusses the ethical, cultural and social consequences of consciousness research (The Ego Tunnel – The Science of the Mind and the Myth of the Self). Important recent Open Access collections are Open MIND (2015, with Jennifer Windt), Philosophy and Predictive Processing (2017, with Wanja Wiese), and Radical Disruptions of Self-Consciousness (2020, with Raphaël Millière).
