Does Big Data offer constructive solutions?

Prof. Dr Maren Urner – Photo: Lea Franke

Big Data is a quantitative tool that measures and depicts correlations, comparisons and frequencies. Does it provide us with solutions, or does its evaluation merely indicate where a solution might be found? And how can Big Data serve as a constructive solution while also aiding and supporting the human brain?

In her Duet interview with neuroscientist and bestselling author Prof. Maren Urner, Dr Caldarola, author of the recent book Big Data and Law, talks with her about the possibilities of constructive criticism and news coverage in relation to Big Data.

Owing to the internet and social media, people are confronted with an ever-increasing quantity of information. This information is neither filtered nor structured and processed, and anyone who has something to say – regardless of whether any research has been done, whether it is topical, or whether it is even interesting – can promulgate it with great speed to a large readership. As with so many things, there are positive aspects to this development (e.g. censorship becomes more difficult) as well as negative ones (a flood of information, fake news etc.). Will Big Data turn into a tool to help process, structure and verify the sheer quantity of information at hand? If it does, what will this development look like?

Prof. Maren Urner: Never before in history have we experienced such a dramatic increase in information – or data. Even if we tried to "consume" information 24/7, we would never be finished. Therefore, we have to use filters that help us decide what we consume. The media, for example, is one of the most important filters. Obviously, the filters we use are not always consciously applied but are a selection from a given set that is provided by certain structures and factors, ranging from the technological infrastructure to social and political values. However, at a higher level, these mechanisms are not conferred upon us by nature but rather are determined by us as humans. For this reason, the question of whether Big Data, AI and technological progress will help us to sort, structure and verify the vast amount of new information generated on a daily basis depends on how we program and use them. Sometimes we have to remind ourselves, as well as politicians and other decision-makers, that every type of technology has been developed by humans and must, therefore, be based on certain values and goals. Given this understanding, and to answer the question more directly: Big Data certainly has the potential to help us structure and understand the overwhelming amount of new information produced by humans every day. It is our choice to do so.

Law is one of the most interesting fields when it comes to the application of Big Data and AI because its very essence is to find "the truth". Long before any technology was first used in a court case, psychologists and other avid observers knew that human perception is not objective or – in other words – not always "the truth". On the contrary, every human observation is an interpretation based on physical input at a sensory level interacting with prior experiences. Witness testimonies are therefore neither objective nor necessarily true. Clever lawyers have long known about and exploited the so-called cognitive biases the human brain comes with, e.g. by asking suggestive questions and presenting certain information while withholding other types. Now we are at a point in time where the truth-finding process of law can be made more accurate by using technology in a clever way. Crucially, the most important word in the previous sentence is "clever", because whether Big Data, algorithms and programs help lawyers and citizens to find the truth will greatly depend on what we as humans and societies value and consider just. It lies in our hands and minds whether we let technology corrupt human judgement by spreading fake news, amplifying groupthink and incentivising the most exciting content, or whether we use it in a way that makes the flaws in human judgement more visible, thereby potentially avoiding injustice. In other words, we have a historic opportunity to advance law by using Big Data instead of letting it blur human judgement even further. In order to do so, we need sound regulations and progressive thinking by the individuals in charge.

Indeed, the field of agnotology is both fascinating and disturbing at the same time. We could even consider it ironic that we now have a scientific discipline that investigates the deliberate propagation of ignorance and doubt about science as such. But if we look behind the curtain and consider the very origin of agnotology, it becomes clear that this is not about science, progress or truth-seeking but about money. Robert Proctor, the science historian from Stanford University who coined the term, first observed the wilful spread of confusion and deceit by the tobacco industry in 1979. A decades-old paper written by the Brown & Williamson tobacco company reveals the psychological mechanism lying behind agnotology: "Doubt is our product since it is the best means of competing with the 'body of fact' that exists in the mind of the general public. It is also the means of establishing a controversy." How does our brain learn? Often by repetition and, to this end, I quote Hebb's law: "What fires together wires together." Therefore, if we receive certain messages, such as those which make us doubt the health risks of smoking, and we hear them over and over again, our brain cannot help but start "thinking": maybe there is some truth to it. The same technique is now used with regard to climate science, and we also observed this phenomenon when the COVID-19 pandemic started.
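To make the repetition mechanism concrete, here is a minimal sketch in Python of a Hebbian-style weight update; the learning rate and all other numbers are illustrative assumptions for this toy example, not a model from the interview. Each co-occurrence of a message and the response it triggers strengthens the association between them, with no new evidence involved:

```python
# Toy Hebbian update: delta_w = learning_rate * pre * post.
# "What fires together wires together": each repeated pairing of a
# message (e.g. "the science is uncertain") with a doubt response
# strengthens the connection between the two.
learning_rate = 0.1   # illustrative value, not empirically derived
pre, post = 1.0, 1.0  # both "neurons" fire whenever the message is heard

w = 0.0  # initial association: no link between message and doubt
for repetition in range(10):
    w += learning_rate * pre * post  # Hebbian co-activation rule
    print(f"repetition {repetition + 1}: association strength = {w:.1f}")

# After ten repetitions the association is strong, although no new
# evidence was ever presented -- repetition alone did the work.
```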

If we view Big Data and algorithms as aids that accompany people with their own abilities, and if we understand Big Data to be a tool used by people to determine frequencies and correlations, which then leads people to discover new sets of material, then it is conceivable that time periods will be reduced, costs lowered, and goals pursued in a more direct and productive manner. But even in this domain, it is people who determine what the parameters are and how to interpret them. What means do we have at our disposal to counter manipulation, agnotology etc.?

Perhaps the most crucial aspect concerning Big Data and AI is the importance of the data and its interpretation. Why? Because no data set comes without classification; it is always based on human categorisation, classification and judgement. With regard to law, this is important when we talk about the values and morals that form the basis of our decision-making. Every AI that is trained to make decisions about "right" and "wrong" can only do so because it is given certain parameters that are human-made – for example, what we value in life, and how much a life, or different lives, are worth. By now, many people have heard about the latter debate and the moral dilemma related to it when it comes to self-driving cars.
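To make that point tangible, here is a purely hypothetical sketch; all names, risk numbers and weights are invented for illustration. The decision rule itself is trivial arithmetic – the moral content lives entirely in the human-chosen weights, and changing them changes the decision:

```python
# Hypothetical decision score for an automated vehicle. The arithmetic
# is trivial; the "ethics" sits entirely in the human-chosen weights.
def score(option, weights):
    return sum(weights[k] * option[k] for k in weights)

options = {
    "swerve":      {"passenger_risk": 0.5, "pedestrian_risk": 0.1},
    "go straight": {"passenger_risk": 0.1, "pedestrian_risk": 0.4},
}

# Two invented value systems expressed as weights (risk counts negatively).
value_systems = {
    "equal weighting":  {"passenger_risk": -1.0, "pedestrian_risk": -1.0},
    "pedestrian-first": {"passenger_risk": -0.5, "pedestrian_risk": -2.0},
}

for label, weights in value_systems.items():
    choice = max(options, key=lambda name: score(options[name], weights))
    print(f"{label}: the car decides to {choice}")

# Same algorithm, same situation, different human-made weights ->
# a different decision. The values are parameters, not code.
```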

On a more basic level of data interpretation, we just need to take a look at recent examples that show how biased every AI is, or rather how it can reveal human biases. Recently, The Economist published an illustrative example of this "bias in, bias out" relationship.1 In the pictures used as training data, women and people of colour were not only underrepresented but were also depicted stereotypically.
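The mechanism behind "bias in, bias out" can be sketched in a few lines of Python on synthetic data; this is a toy model invented for illustration, not The Economist's analysis. A simple classifier trained on a sample dominated by one group performs noticeably worse on the underrepresented group:

```python
import numpy as np

rng = np.random.default_rng(42)
d = 10

# Each group "expresses" the label along its own feature direction,
# so a model tuned to one group transfers poorly to the other.
dir_a = rng.normal(size=d); dir_a /= np.linalg.norm(dir_a)
dir_b = rng.normal(size=d); dir_b /= np.linalg.norm(dir_b)

def sample(n, direction):
    y = rng.integers(0, 2, n)                                     # binary label
    X = rng.normal(size=(n, d)) + np.outer(2 * y - 1, direction)  # label shifts features
    return X, y

# Skewed training set: group A heavily overrepresented (900 vs 100).
Xa, ya = sample(900, dir_a)
Xb, yb = sample(100, dir_b)
X, y = np.vstack([Xa, Xb]), np.concatenate([ya, yb])

# Nearest-centroid classifier learned from the pooled, skewed data.
c0, c1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)

def predict(X):
    return (np.linalg.norm(X - c1, axis=1) < np.linalg.norm(X - c0, axis=1)).astype(int)

for name, direction in [("group A (majority)", dir_a), ("group B (minority)", dir_b)]:
    Xt, yt = sample(2000, direction)
    print(f"{name}: accuracy {(predict(Xt) == yt).mean():.2f}")

# Accuracy is markedly lower for the underrepresented group: the skew
# in the training data reappears as a skew in the errors.
```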

Crucially, we can use these revelations as important discoveries: to become aware of our own biases and of stereotypical depictions in our daily lives, and eventually to learn to address them by developing the next generation of AI based on less biased data and decisions. Thus, in my opinion, the most important tool against detrimental manipulation and agnotology is an open and critical societal as well as political discussion about our underlying motivations, values and morals. To get there, we have to place more emphasis on what is called critical thinking, in educational institutions as well as in the media.

Digitalisation is leading to further automation and thus to a reduction in human labour. We can observe more and more offers, appointments, job applications etc. being handled online. Every one of these online processes works with pre-filled boxes and forms on the screen, and all operate without people. Thus, a sort of standardisation and normalisation is taking place so that, owing to Big Data and algorithms, evaluations can be made faster, more efficiently and better. Isn't this trend the opposite of the naivety, forbearance and curiosity you so strongly advocate? How can innovation possibly work together with standardisation? How can we expect forbearance when there is a software programme at the other end, or a robot, an avatar or a computer? Is there a place for curiosity and a spirit of discovery when the information being suggested has been customised through "targeted marketing"? Is the machine going to become the new scapegoat? To put it differently, is this just a fast track for our brains towards "learned helplessness", a term often mentioned by you?

I think what we really have to learn is to consider technology in general, and Big Data including automation specifically, as something that can help us – the most important word here being "can", of course. We need to stop promoting and using technology against humans and human interests. Instead, we should focus more on how clever technology – made by humans – can and should be applied to trigger those aspects I advocate as the main ingredients not only of a happy and healthy life, but also of human progress in general. Only because we have this amazing curiosity and the ability to imagine and predict future outcomes were we able to fly to the moon, develop smartphones and create vaccines. Thus, the advances of the future greatly depend on how we lead this discussion at a societal level. We cannot and should not leave it to the big tech companies to determine how we use our time and attention – and how they use the data generated by that time – simply so that they can become even more powerful.

In other words, it is a societal challenge and opportunity at the same time to determine how we use the ever more advanced technologies that we as humans have created. Naturally – from my perspective as a neuroscientist and cognitive psychologist – we have to look at what are traditionally considered "tech debates" from a psychological, mind-based angle in order to avoid the creation of mindless behaviour.

Adding to the more general remarks I made above, I am convinced that Big Data can support human decision-making in every domain. We already have countless examples from the medical world, where Big Data is greatly advancing diagnosis, for example in radiology.2 It can help in the fight against world hunger, the climate crisis and injustice, starting by making human biases and stereotypes visible, as mentioned above.3 Basically, it can help humans to "do good" better. Why and how? Because AI is better than humans at detecting patterns in data and making predictions. The weather forecast is a very early example of this use of technology and is now being employed to warn people of severe weather events, for example. To cut a possibly long story short, I want to end with a call to everybody reading this: when it comes to Big Data and its use(fulness), we – as a global society – should focus more on the "what for" instead of the "against what". We should ask ourselves what life we want to live, not what life we want to avoid.4 This mindset – often called a growth mindset in psychology – enables us to use Big Data and AI to stimulate curiosity, solution-oriented thinking and, thereby, human flourishing.
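As a minimal illustration of "detecting patterns in data and making predictions" – entirely synthetic numbers, not a real forecasting model – the sketch below fits a hidden trend in noisy measurements and extrapolates it, which is the basic shape of any forecast:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic measurement series: a hidden linear trend plus noise,
# standing in for any quantity we might want to forecast.
t = np.arange(50, dtype=float)
true_slope, true_intercept = 0.3, 12.0
y = true_intercept + true_slope * t + rng.normal(0, 2.0, t.size)

# Pattern-detection step: least-squares fit of the trend.
A = np.vstack([t, np.ones_like(t)]).T
slope, intercept = np.linalg.lstsq(A, y, rcond=None)[0]

# Prediction step: extrapolate the detected pattern to unseen times.
t_future = np.arange(50, 60, dtype=float)
forecast = intercept + slope * t_future
print(f"estimated slope {slope:.2f} (true {true_slope}), "
      f"first forecast value {forecast[0]:.1f}")
```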

My favourite quotation in this context is:

"Problem talk creates problems, solution talk creates solutions."

Steve de Shazer

Prof. Urner, thank you for sharing your insights on the possibilities of constructive criticism and news coverage in relation to Big Data.

Thank you, Dr Caldarola. I look forward to reading your upcoming interviews with recognised experts, delving even deeper into this fascinating topic.


1 https://www.economist.com/graphic-detail/2021/06/05/demographic-skews-in-training-data-create-algorithmic-errors

2 "Diese 4 Krankheiten erkennen Algorithmen heute schon besser als echte Ärzte" ("These four diseases are already detected better by algorithms than by real doctors"), Perspective Daily (perspective-daily.de)

3 https://perspective-daily.de/article/453/dRGqUE7x

4 https://perspective-daily.de/article/1298/p3cRK4j9

About me and my guest

Dr Maria Cristina Caldarola

Dr Maria Cristina Caldarola, LL.M., MBA is the host of “Duet Interviews”, co-founder and CEO of CU³IC UG, a consultancy specialising in systematic approaches to innovation, such as algorithmic IP data analysis and cross-industry search for innovation solutions.

Cristina is a well-regarded legal expert in licensing, patents, trademarks, domains, software, data protection, cloud, big data, digital ecosystems and Industry 4.0.

A TRIUM MBA, Cristina is also a frequent keynote speaker, a lecturer at St. Gallen, and the co-author of the recently published Big Data and Law, now available in English, German and Mandarin editions.

Prof. Dr Maren Urner

Maren Urner is a professor of media psychology at the HMKW University of Applied Sciences for Media, Communication and Management in Cologne. A neuroscientist and cognitive psychologist, she has conducted research in Canada, the Netherlands and the U.K. Professor Urner is the co-founder of Perspective Daily, the first ad-free online magazine for constructive journalism. Her books Schluss mit dem täglichen Weltuntergang ("An End to the Daily Doomsday", Droemer Knaur, June 2019) and Raus aus der ewigen Dauerkrise ("Leaving the Eternal Crisis Mode Behind", Droemer Knaur, May 2021) are both on the SPIEGEL bestseller list. She is a columnist for the Frankfurter Rundschau, a popular guest on talk shows, and a much sought-after keynote speaker. Through her work, she seeks to build bridges and encourages us to be optimistic.

