The role of Big Data in executive decision-making: Data-led decisions versus decisions made using instinct and experience? Can there be too much evidence in an evidence-based decision-making process?
In a continuation of her Duet interview series, Dr Caldarola, author of Big Data and Law, chats with Prof. Roger Hallowell, an acknowledged expert on leadership, regarding the practice of decision-making today – and tomorrow.
Good leaders base their decisions on hard facts to keep their decision-making comprehensive and objective. A primary goal has often been to have factual support for the decision which has been taken so as not to be perceived as being impulsive, irrational, partial or biased. Is Big Data an effective tool for improving this type of decision-making process?
Prof. Hallowell: In general, yes. Good decision-making should be driven by good data. However, a phrase frequently repeated by people in the IT world is: “garbage in, garbage out”. That is why we need to consider the question – especially in these early days of Big Data, AI and the like – what is the quality of the data we have? I think history has taught us that we need to be sceptical and hesitant. In this regard, let me give you an example: In 2003, the US invaded Iraq largely because we believed that weapons of mass destruction were being hidden there. Now, why did we believe that? To no small degree, it was because extremely credible people – and I am now referring to Colin Powell, who was a person of the highest integrity – assured us that the best data available indicated that there were weapons of mass destruction. Needless to say, we know now that he was wrong. So even when you have the best data available and you make a decision based on that information, that data can frankly be wrong. And certainly, after the US and NATO experience in Iraq, it is hard to deny that it was a mistake to have gone in in the first place. And certainly, from the view of the Iraqi people: so many have died, so many billions of dollars have been spent – and for what? Is the situation any better today than it was in 2003? Realistically and honestly, no. In fact, for many people it is worse – people are on the move. So, I think the question with regard to using Big Data in decision-making in big organisations is: To what degree are we confident that we are getting a definitive answer? We must keep in mind that we need to be sceptical. Of course, it is difficult to say that we shouldn’t make decisions based on our gut feelings but, frankly, we know there is good information out there indicating that data-driven decisions are almost always the best way to go. But again: garbage in, garbage out.
It really depends on the quality of the data.
How reliable can an evidence-based decision process be in view of the flood of both fake and true information – and especially the increasing use of agnotology?
For all of your readers who do not know what agnotology is: Agnotology is the study of culturally conditioned ignorance or doubt, typically to sell a product or win favour, particularly through the publication of inaccurate or misleading scientific data. More generally, the term also highlights the condition where more knowledge of a subject leaves one more uncertain than before. Active causes of culturally induced ignorance can include the influence of the media, corporations, and governmental agencies, through secrecy and suppression of information, document destruction, and selective memory. A classic example is climate denial, where oil companies paid teams of scientists to downplay the effects of climate change. Passive causes include structural information bubbles, including those created by segregation along racial and class lines, which lead to differential access to information.
Agnotology also focuses on how and why diverse forms of knowledge do not “come to be”, or are ignored or delayed. For example, knowledge about plate tectonics was censored and delayed for at least a decade because some of the evidence remained classified military information related to undersea warfare. Other examples are the numerous scientific research studies on microwaves, Teflon, plastics and the like which manufactured doubt about the health effects of these products.
And here again we receive that information – whether true or fake, whether well researched or not, whether manipulated or not – from scientists who are regarded as being highly credible people.
Agnotology is extremely topical: I do not remember exactly when it was but, at some point, President Joe Biden said that Facebook was killing people by allowing so much false information about the vaccines against Covid-19 to circulate. I hate to oversimplify, but it really goes back to the fundamental principle: garbage in, garbage out.
My (second) favourite quote is “garbage in, garbage out” – a term coined by George Fuechsel. The goal must be to move from GIGO to QIQO: quality in, quality out.
If false information enters a system which is not capable of differentiating between true and false information, then the decision taken is only as good as the information on which it was based.
There was a book that came out recently about former President Donald Trump saying that he was of the belief that if you repeated something often enough – even if it was false – it became true.
There is also a trend to reduce the complexity of information by shifting the discussion to the terrain of opinions, expressed through clicking on likes and dislikes.
There is also the phenomenon of gaming the system: by figuring out how an algorithm works, one can come up with a way of influencing it disproportionately. I remember, back in the early 2000s when the notion of customers influencing perceptions of a product became prevalent, I started thinking that this was not objective and was also problematic.
Big Data is certainly not a panacea for leaders forming their opinions and making their decisions. To be sure, US companies are marketing themselves in utterly absurd ways by pretending they are an “AI leader” and are “providing enterprise-wide AI”. But what does this mean? Most probably nothing. Perhaps it simply means that the company in question is providing AI through a cloud that can be used by others. But is that really an actual value proposition? Again, most likely not.
It is important to consider how clear the message is that data analysis is giving us. There is a saying attributed to Ronald H. Coase: “If you torture data long enough, it will confess to almost anything”. Furthermore, Lord Courtney stated as early as 1895: “There are lies, there are damned lies and there are statistics”. Big Data is not new. The only thing that is new is that data analysis is being done more effectively than before.
What informs people when they make decisions and form judgements?
Let us begin by considering the way we traditionally got our news in the past and the way so many people are getting their news today. A key difference is that, in the past, our news was curated by a trusted source – whether the NY Times, the Wall Street Journal, the Financial Times and so on. There was somebody out there saying: this is worth printing, and this is not; this is real, and this is not.
The same can be applied to traditional encyclopaedias versus today’s Wikipedia. Although Wikipedia does make an effort to ban misinformation, the question remains: how long does it take Wikipedia or any other digital news platform to remove misinformation? And do they actually get around to removing it? Let me give you an example from Wikipedia: I have a relative who was the Secretary of Commerce under President Eisenhower. There is a Wikipedia page on him, and another relative of mine went on Wikipedia and wrote all kinds of things that were unsubstantiated and absolutely wacko. Yet nobody bothered to correct these entries, nobody identified the contribution as being plain wrong, and so the page has remained out there in this form.
And what is exacerbating this problem? At the moment, when the left wing is being extremist and tries to impose its radical views on everyone, the right wing simply shuts down and doesn’t listen. And, conversely, when the right wing is being extreme and tries to impose its radical views on everyone, the left simply shuts down and doesn’t listen. So, what is certainly happening in the US today and in some EU countries as well as, I imagine, in some other countries around the world, is that we are only hearing from the extreme side and not hearing from moderates, who, quite frankly, are the majority. This moderate voice is rendered silent – which is a huge problem for society.
Social media platforms are earning an enormous amount of money; they are incredibly profitable. Shouldn’t they be held accountable for the misinformation they are facilitating? Shouldn’t they use some of their earnings to correct this situation? Facebook is referred to as having “infinite scalability” in its purest form: once you create the platform, you can have as many people as you want using it. The question is: who should be held liable for distributing fake content, and who should be liable for creating it? I think that a platform distributing potentially fake content is responsible for policing itself and curating the content it distributes – it owes that to its customers. After all, traditional newspapers – like the Financial Times – still carry out these important tasks.
The issue here is that this policing is a variable cost that dramatically reduces the profitability of platforms like Facebook, which is why such platforms are doing everything to resist that kind of costly, additional work. I think we owe policing to society; if platforms like Facebook, Instagram, Snapchat and the like do not want to police themselves, then I think they should incur the costs of being monitored – not only with regard to the facts and information they propagate, but also concerning the opinions they are creating. Organisations that have – at least in part – the goal of influencing people in a way that misguides them should not be allowed to operate. Period. And if these companies are in foreign countries – such as, for example, Russian sources of misinformation – they should be blocked.
The challenge companies are facing is that AI – as of today – is not good enough to police fake news or unethical behaviour. There are companies out there that have infinite business scalability, meaning that they can accumulate millions of new customers just by adding a few more servers at a very small incremental variable cost. Nevertheless, there is currently a debate going on concerning legality and ethics when data is processed. Companies who do not pledge to handle personal data ethically will certainly starve. The challenge these companies are facing today is therefore the decision whether to discipline themselves by hiring an army of moderators to curate their data – and consequently relinquish profitability – or to wait for better AI or for the breakup of their company (cartel) into different pieces, as we know from the history of Standard Oil.
But is there a difference between fact and opinion? Can opinions be banned and curated? And isn’t the internet already full of opinions in the form of likes and dislikes?
Of course, there is a difference between facts that one can verify and opinions that are subjective. Let us take the example of the attack on the Capitol on January 6, 2021. Some may be of the opinion that it was the right thing to do while others might state that the invasion was terrible and violent because three people were killed. Perhaps we need to make a distinction between opinion and fact, as we do in traditional newspapers where we know that we tend to find facts on the front page and opinions in the editorial section.
The same should apply to causality. Some people are convinced that people died due to the COVID-19 vaccination – and indeed there were people who died shortly after being vaccinated. Nevertheless, there were no cases where the vaccination itself caused death.
I think we as a society need to educate people about what facts, opinions and causality are. It is true that partial, incomplete information – cut off from an essential element – can spread and be sold on social media in seconds, especially if the content is lurid. It is also true that spreading misinformation is difficult to stop because it takes time for research to determine whether a statement is complete and true.
This is so topical because there is a quote from a new book about Facebook entitled An Ugly Truth: Inside Facebook’s Battle for Domination. The book’s title came from an email by a Facebook executive who had said that Facebook was all about connecting people, and sometimes those connections resulted in terrorist activities or in people being killed – but the desirability of connecting people was so great that, at Facebook at least, they preferred to err on the side of enhancing connection rather than ensuring truth. This revelation is a smoking gun.
What do you think is needed in these times of social media, digital leadership and digital journalism?
We need to have a new social contract in which we say: if you are making money as a result of connecting people, and those people are looking at the information your platform is providing, then that information must be curated. When you are operating in a world where a US President can speak of “alternative facts”, we have to admit that this is simply wrong – it is a lie. Dr Anthony Fauci is the head of the National Institute of Allergy and Infectious Diseases, and he is the top White House advisor on the pandemic. Dr Fauci was in a hearing in the Senate, and a prominent Republican Senator said all the evidence pointed to the coronavirus having been released from a laboratory in Wuhan. To this Fauci actually replied: “Senator, you are lying; what you are saying is not correct.” This is an example of what we should be doing in governance. The majority of the evidence concerning the origin of the virus points to it having been transmitted from an animal to a person. There is enough evidence supporting the idea of the virus having come from that lab in Wuhan that the question needs to be investigated, and I am the first to openly admit that. However, the reality is that the preponderance of the evidence is that it came from an animal. We need to get people to understand those distinctions.
I like the idea of leaders having to self-regulate. But the social media industry has proven that it is not doing so. This in turn means that the damage it is causing is bigger than the benefit it is providing – so you need to regulate that industry.
On the one hand, privacy is certainly not meant to cost anything; yet, on the other hand, it is the nature and indeed the duty of business leaders to earn money. Milton Friedman once said: “The only objective that business leaders have is to take care of their shareholders”. Nevertheless, there is also the other side of the spectrum. The US Chamber of Industry and Commerce has recently stated that companies should take care of a variety of stakeholders – including customers and employees. Goldman Sachs just reported that companies not having a wide enough view of their stakeholders will be divested. That is the new leadership challenge.
So, allow me to reiterate at this point that Big Data is not a panacea and it is not even new. That having been said, we can certainly enhance its use as well as our ability to analyse it by employing powerful supplementary forms of technology, such as AI. The quality of data matters and the result must be a clear message. Finally, leaders should curate their data and information (with or without AI) and need to be engaged in a conversation about Big Data and privacy with their constituency.
Prof. Hallowell, thank you for sharing your insights on the practice of decision-making today and tomorrow.
Thank you, Dr Caldarola, and I look forward to reading your upcoming interviews with recognised experts, delving even deeper into this fascinating topic.