Big Data in Executive Decision-Making: Friend or Foe?

Prof. Roger Hallowell, Ph.D.

The role of Big Data in executive decision-making: data-led decisions versus decisions made on instinct and experience. Can there be too much evidence in an evidence-based decision-making process?

In a continuation of her Duet interview series, Dr Caldarola, author of Big Data and Law, chats with Prof. Roger Hallowell, an acknowledged expert on leadership, regarding the practice of decision-making today – and tomorrow.

Good leaders base their decisions on hard facts to keep their decision-making comprehensive and objective. A primary goal has often been to have factual support for the decision which has been taken so as not to be perceived as being impulsive, irrational, partial or biased. Is Big Data an effective tool for improving this type of decision-making process?

Prof. Hallowell: In general, yes. Good decision-making should be driven by good data. However, a phrase that is frequently repeated by people in the IT world is: “garbage in, garbage out”. That is why we need to consider the question – especially in these early days of Big Data, AI and the like – what is the quality of the data we have? I think history has taught us that we need to be sceptical and hesitant. In this regard, let me give you an example: in 2003, the US invaded Iraq largely because we believed that weapons of mass destruction were being hidden there. Now, why did we believe that? To no small degree, it was because extremely credible people – and I am now referring to Colin Powell, who was a person of the highest integrity – assured us that the best data available indicated that there were weapons of mass destruction. Needless to say, we know now that he was wrong. So even when you have the best data available and you make a decision based on that information, that data can frankly be wrong. And certainly, after the US and NATO experience in Iraq, it is hard to deny that it was a mistake to have gone in in the first place. And certainly, from the view of the Iraqi people: so many Iraqis have died, so many billions of dollars have been spent – and for what? Is the situation any better today than it was in 2003? Realistically and honestly, no. In fact, for many people it is worse – people are on the move. So, I think the question with regard to using Big Data in decision-making in big organisations is: to what degree are we confident that we are getting a definitive answer? We must keep in mind that we need to be sceptical.
Of course, it is difficult to say that we shouldn’t make decisions based on our gut feelings but, frankly, we know there is good information out there indicating that data-driven decisions are usually, if not almost always, the best way to go. But again: “garbage in, garbage out”. It really depends on the quality of the data.

How reliable can an evidence-based decision process be in view of the flood of both fake and true information – and especially the increasing use of agnotology?

For all of your readers who do not know what agnotology is: agnotology is the study of culturally conditioned ignorance or doubt, typically to sell a product or win favour, particularly through the publication of inaccurate or misleading scientific data. More generally, the term also highlights the condition where more knowledge of a subject leaves one more uncertain than before. Active causes of culturally induced ignorance can include the influence of the media, corporations and governmental agencies, through secrecy and suppression of information, document destruction, and selective memory. A classic example is climate denial, where oil companies paid teams of scientists to downplay the effects of climate change. Passive causes include structural information bubbles, including those created by segregation along racial and class lines, which lead to differential access to information.

Agnotology also focuses on how and why diverse forms of knowledge do not “come to be”, or are ignored or delayed. For example, knowledge about plate tectonics was censored and delayed for at least a decade because some evidence remained classified military information related to undersea warfare. Other examples are the numerous scientific research studies regarding microwaves, Teflon, plastics and the like that manufacture doubt about the health effects of using these products.

And here again we receive that information – whether true or fake, whether well researched or not, whether manipulated or not – from scientists who are regarded as being highly credible people.

Agnotology is extremely topical: I do not remember exactly when it was but, at some point, President Joe Biden said that Facebook was “killing people” by allowing so much false information about the vaccines against Covid-19. I hate to oversimplify, but it really goes back to the fundamental principle: garbage in, garbage out.

My (second) favourite quote is:

From GIGO to QIQO (quality in, quality out)

building on GIGO, a term coined by George Fuechsel

If one has false information entering a system which is not capable of differentiating between true and false information, then the decision taken is only as good as the information on which it was based.

There was a book that came out recently about former President Donald Trump saying that he was of the belief that if you repeated something often enough – even if it was false – it became true.

There is also a trend to reduce the complexity of information by shifting the discussion to the terrain of opinions by clicking on likes and dislikes.

There is also the phenomenon of gaming the system: by figuring out how an algorithm works, one comes up with a way of influencing those algorithms disproportionately. I remember back in the early 2000s, when the notion of customers influencing the perception of a product became prevalent, I started thinking that this was not objective and was also problematic.

Big Data is certainly not a panacea for leaders forming their opinions and making their decisions. To be sure, US companies are marketing themselves in utterly absurd ways by pretending they are an “AI leader” and are “providing enterprise-wide AI”. But what does this mean? Most probably nothing. Perhaps it simply means that the company in question is providing AI through a cloud that can be used by others. But is that really an actual value proposition? Again, most likely not.

It is important to consider the question of how clear the message is that data analysis is giving us. There is a saying attributed to Ronald H. Coase: “If you torture data long enough, it will confess to almost anything”. Furthermore, Lord Courtney stated as far back as 1895: “There are lies, there are damned lies and there are statistics”. Big Data is not new. The only thing that is new is that data analysis is being done more effectively than before.

What informs people when they make decisions and form judgements?

Let us begin by considering the way we traditionally got our news in the past and the way so many people are getting their news today. A key difference is that in the past our news was curated by a trusted source, whether this was the NY Times, the Wall Street Journal, the Financial Times and so on. There was somebody out there saying: this is worth printing, and this is not worth printing; this is real, and this is not real.

The same can be applied to traditional encyclopaedias versus today’s Wikipedia. Although Wikipedia does make an effort to ban misinformation, the question remains: how long does it take Wikipedia or any other digital news platform to ban misinformation? And do they actually get around to banning it? Let me give you an example from Wikipedia: I have a relative who was the Secretary of Commerce under President Eisenhower. There is a Wikipedia page on him, and another relative of mine went on Wikipedia and wrote all kinds of things that were unsubstantiated and absolutely wacko. Yet nobody bothered to correct these entries, nobody identified this contribution as being plain wrong, and so this page has remained out there in this form.

And what is exacerbating this problem? At the moment, when the left wing is being extreme and tries to impose its radical views on everyone, the right wing simply shuts down and doesn’t listen. And, conversely, when the right wing is being extreme and tries to impose its radical views on everyone, the left simply shuts down and doesn’t listen. So, what is certainly happening in the US today and in some EU countries – as well as, I imagine, in some other countries around the world – is that we are only hearing from the extreme sides and not from the moderates, who, quite frankly, are the majority. This moderate voice is rendered silent – which is a huge problem for society.

Social media platforms are earning an enormous amount of money. They are incredibly profitable. Shouldn’t they be held accountable for the misinformation they are facilitating? Shouldn’t they use some of their earnings to correct this situation? Facebook is referred to as having “infinite scalability” in its purest form: once you create the platform, you can have as many people as you want using it. The question is: who should be held liable for the distribution of fake content, and who should be liable for creating fake content? I think that a platform distributing potentially fake content is responsible for policing itself and curating the content it distributes – it owes that to its customers. After all, traditional newspapers – like the Financial Times – still carry out these important tasks.

The issue here is that this is a variable cost that dramatically reduces the profitability of platforms like Facebook, and that is why such platforms are doing everything to resist that kind of costly, additional work. I think we owe policing to society; if platforms like Facebook, Instagram, Snapchat and the like do not want to police themselves, then I think they should incur the costs of being monitored – not only with regard to the facts and information they propagate, but also concerning the opinions they are creating. Organisations that have – at least in part – as their goal to influence people in a way that misguides them should not be allowed to operate. Period. And if these companies are in foreign countries – such as, for example, Russian sources of misinformation – they should be blocked.

The challenge companies are facing is that AI – as of today – is not good enough to police fake news or unethical behaviour. There are companies out there that have infinite business scalability – meaning that they accumulate millions of new customers by just adding a few more servers at an incredibly small incremental variable cost. Nevertheless, there is currently a debate going on concerning legality and ethics when data is processed. Companies that do not pledge to be ethical with personal data will certainly starve. The challenge these companies are facing today is therefore the decision whether to discipline themselves by hiring an army of moderators to curate their data, and consequently relinquish profitability, or to wait for better AI or the breakup of their company (cartel) into different pieces, as we know from the history of Standard Oil.

But is there a difference between fact and opinion? Can opinions be banned and curated? And isn’t the internet already full of opinions in the form of likes and dislikes?

Of course, there is a difference between facts that one can verify and opinions that are subjective. Let us take the example of the attack on the Capitol on January 6, 2021. Some may be of the opinion that it was the right thing to do, while others might state that the invasion was terrible and violent because three people were killed. Perhaps we need to make a distinction between opinion and fact, as we do in traditional newspapers, where we know that we tend to find facts on the front page and opinions in the editorial section.

The same should apply to causality. Some people are convinced that people died due to the COVID-19 vaccination – and indeed there were people who died shortly after being vaccinated. Nevertheless, there were no cases where the vaccination itself caused death.

I think we as a society need to educate people about what facts, opinions and causality are. It is true that partial, incomplete information – cut off from an essential element – can spread and be sold on social media in seconds, especially if the content is lurid. It is also true that spreading misinformation is difficult to stop, because it takes time for research to determine whether a statement is complete and true.

This is so topical because of a quote from a recent book about Facebook entitled An Ugly Truth: Inside Facebook’s Battle for Domination. The book’s name came from a posting by a Facebook executive who had said in an email that Facebook was all about connecting people, and sometimes those connections resulted in terrorist activities, or those connections resulted in people being killed, but the desirability of connecting people was so great that – at Facebook at least – they preferred to err on the side of enhancing connection rather than ensuring truth. This revelation is a smoking gun.

What do you think is needed in these times of social media, digital leadership and digital journalism?

We need to have a new social contract in which we say: if you are making money as a result of connecting people, and those people are looking at the information your platform is providing, then that information must be curated. When you are operating in a world where a US President can speak of “alternative facts”, we have to admit that this is simply wrong – it is a lie. Dr Anthony Fauci is the Head of the National Institute of Allergy and Infectious Diseases, and he is the top White House advisor on the pandemic. Dr Fauci was in a hearing in the Senate, and a prominent Republican Senator said all the evidence pointed to the coronavirus having been released from a laboratory in Wuhan. To this Fauci actually replied: “Senator, you are lying; what you are saying is not correct.” This is an example of what we should be doing in governance. The majority of the evidence concerning the origin of the virus points to it having been transmitted from an animal to a person. There is enough evidence supporting the idea of the virus having come from that lab in Wuhan that it needs to be investigated, and I am the first to openly admit that. However, the reality is that the preponderance of the evidence is that it came from an animal. We need to get people to understand those distinctions.

I like the idea of leaders having to self-regulate. But the social media industry has proven that it is not doing so. This in turn means that the damage it is causing is bigger than the benefit it is providing – so you need to regulate that industry.

On the one hand, privacy is certainly not meant to cost anything; yet, on the other hand, it is the nature and indeed the duty of business leaders to earn money. Milton Friedman once said: “The only objective that business leaders have is to take care of their shareholders”. Nevertheless, there is also the other side of the spectrum. The US Chamber of Commerce has recently stated that companies should take care of a variety of stakeholders – including customers and employees. Goldman Sachs just reported that companies not having a wide enough view of their stakeholders will be divested. That is the new leadership challenge.

So, allow me to reiterate at this point that Big Data is not a panacea, and it is not even new. That having been said, we can certainly enhance its use, as well as our ability to analyse it, by employing powerful supplementary forms of technology such as AI. The quality of the data matters, and the result must be a clear message. Finally, leaders should curate their data and information (with or without AI) and need to be engaged in a conversation about Big Data and privacy with their constituency.

Prof. Hallowell, thank you for sharing your insights on the practice of decision-making today and tomorrow.

Thank you, Dr Caldarola, and I look forward to reading your upcoming interviews with recognised experts, delving even deeper into this fascinating topic.

About me and my guest

Dr Maria Cristina Caldarola

Dr Maria Cristina Caldarola, LL.M., MBA is the host of “Duet Interviews”, co-founder and CEO of CU³IC UG, a consultancy specialising in systematic approaches to innovation, such as algorithmic IP data analysis and cross-industry search for innovation solutions.

Cristina is a well-regarded legal expert in licensing, patents, trademarks, domains, software, data protection, cloud, big data, digital eco-systems and industry 4.0.

A TRIUM MBA, Cristina is also a frequent keynote speaker, a lecturer at St. Gallen, and the co-author of the recently published Big Data and Law now available in English, German and Mandarin editions.

Prof. Roger Hallowell, Ph.D.

Roger Hallowell is Adjunct Professor of Strategy and Business Policy at HEC, Paris, and Managing Director of The Service, Strategy and Change Group LLC. His career has been dedicated to working with executives on issues of strategic concern to their organisations. He was previously a managing partner at the Centre for Executive Development and a professor at Harvard Business School, where Prof. Hallowell conducted research from 1991 through 2003. Roger has designed and delivered customised executive education programs throughout most parts of the world. He also facilitates executive meetings and speaks at conferences.
Roger’s academic work focuses on leadership of organisations wanting to increase the value delivered to customers, often through service. His projects are designed to help executives and senior managers enhance their leadership abilities, including their ability to design and implement change. He is an authority on strategic initiatives involving high-value service as well as concomitant cost reduction and quality improvement.
Roger’s career began as a banker on Wall Street and includes two senior management positions in industry. He has authored numerous papers and written more than 60 case studies on organisations in North America, Europe, and Asia, including three HBS best-sellers. Prof. Hallowell has also advised private equity firms on their investments in the service sector.
Roger has a bachelor’s degree from Harvard College (1984) and an MBA and Doctorate from Harvard Business School. He lives near Boston with his wife and two sons.
