Big Data is a quantitative tool that measures and depicts correlations, comparisons and frequencies. Does it provide us with solutions, or does its evaluation merely indicate where a solution might be found? How can Big Data serve as a constructive solution while also aiding and supporting the human brain?
In her Duet interview with neuroscientist and best-selling author Prof. Maren Urner, Dr Caldarola, author of the recent book Big Data and Law, discusses the possibilities of constructive criticism and news coverage in relation to Big Data.
Owing to the internet and social media, people are confronted with an ever-increasing quantity of information. This information is neither filtered nor structured and processed, and anyone who has something to say, regardless of whether any research has been done, whether it is topical or whether it is even interesting, can promulgate it with great speed to a large readership. As with so many things, there are positive aspects to this development (e.g. censorship becomes more difficult) as well as negative ones (a flood of information, fake news etc.). Will Big Data become a tool that helps to process, structure and verify the sheer quantity of information at hand? If it does, what will this development look like?
Prof. Maren Urner: Never before in history have we experienced such a dramatic increase in information – or data. Even if we tried to “consume” information 24/7, we would never be finished. Therefore, we have to use filters that help us to decide what we consume. The media, for example, is one of the most important filters. Obviously, the filters we use are not always consciously applied but are a selection from a given set that is provided by certain structures and factors, ranging from the technological infrastructure to social and political values. However, at a higher level, these mechanisms are not conferred upon us by nature but rather are determined by us as humans. For this reason, the question of whether Big Data, AI and technological progress will help us to sort, structure and verify the vast amount of new information that is generated on a daily basis depends on how we program and use them. Sometimes we have to remind ourselves, as well as politicians and other decision-makers, that every technology has been developed by humans and is, therefore, based on certain values and goals. Given this understanding, and to answer the question more directly: Big Data certainly has the potential to help us structure and understand the overwhelming amount of new information produced by humans on a daily basis. It is our choice to do so.
In some global court cases, there are innumerable files, reports and other documents that are all relevant to the case. Even today, it is difficult for a solicitor or a team of solicitors to process this mountain of information. We can already see that Big Data and algorithms will help to find and quickly present legal arguments and testimony from these immense piles of files and reports in order to counter other legal claims and evidence. The production and sheer quantity of information was, or rather is, an instrument to prolong court cases, raise costs and obscure the “real” core of the matter, whether by distracting from it with new aspects and focal points or by making it unclear through the language or context chosen. Will this type of strategy have to change and, if so, what will this change look like?
Law is one of the most interesting fields when it comes to the application of Big Data and AI because its very essence is to find “the truth”. Long before any technology was first used in a court case, psychologists and other avid observers knew that human perception is not objective or – in other words – not always “the truth”. On the contrary, every human observation is an interpretation, based on physical input at a sensory level interacting with prior experiences. Witness testimony is therefore neither objective nor necessarily true. Clever lawyers have long known about and exploited the so-called cognitive biases the human brain comes with, e.g. by asking suggestive questions and presenting certain information while withholding other kinds. Now, we are at a point in time where the truth-finding process of law can be made more accurate by using technology in a clever way. Crucially, the most important word in the previous sentence is “clever”, because whether Big Data, algorithms and programs help lawyers and citizens to find the truth will greatly depend on what we as humans and societies value and consider just. It lies in our hands and minds whether we let technology corrupt human judgement by spreading fake news, amplifying group thinking and incentivising the most exciting content, or whether we use it in a way that makes the flaws in human judgement more visible, thereby potentially avoiding injustice. In other words, we have a historic opportunity to advance law by using Big Data instead of letting it blur human judgement even further. In order to do so, we need sound regulation and progressive thinking by the individuals in charge.
Even in science we can observe new trends, such as the discipline of Agnotology, which is now being researched and taught at a few universities. Agnotology is the study of the deliberate creation and maintenance of ignorance. You surely know of many examples where expert opinions by researchers have done nothing but spread doubt and discord. You only have to think of the radiation emitted by mobile devices and its effect on humans, or microwave ovens, or Teflon coatings etc. Big Data can determine both quantitative aspects (the largest, the best, the most frequent) and correlations. Will “scientific” statements become “true” statements the more they are tweeted, liked etc.?
Indeed, the field of Agnotology is both fascinating and disturbing at the same time. We could even consider it ironic that we now have a scientific discipline that investigates the deliberate propagation of ignorance and doubt about science as such. But if we look behind the curtain and consider the very origin of Agnotology, it becomes clear that this is not about science, progress or truth-seeking but about money. Robert Proctor, the science historian from Stanford University who coined the term, first observed the wilful spread of confusion and deceit by the tobacco industry in 1979. A decades-old paper written by the Brown & Williamson tobacco company reveals the psychological mechanism behind Agnotology: “Doubt is our product since it is the best means of competing with the ‘body of fact’ that exists in the mind of the general public. It is also the means of establishing a controversy.” How does our brain learn? Often by repetition, and, to this end, I quote the so-called Hebbian law: “What fires together, wires together.” Therefore, if we receive certain messages, such as those which make us doubt the health risks of smoking, and we hear them over and over again, our brain cannot help but start “thinking”: maybe there is some truth to it. The same technique is now being used with regard to climate science, and we also observed this phenomenon when the COVID-19 pandemic started.
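To make this mechanism concrete, here is a minimal, illustrative sketch of the Hebbian update rule in Python. All numbers are hypothetical and the toy model is nothing like a real brain; the point is simply that repeated co-activation strengthens a connection, just as repeated exposure to a message strengthens its trace in memory, regardless of whether the message is true.

```python
# Toy sketch of Hebbian learning ("what fires together, wires together").
# Learning rate and activation values are hypothetical illustration values.

LEARNING_RATE = 0.1

def hebbian_update(weight: float, pre: float, post: float) -> float:
    """Strengthen a connection when the pre- and postsynaptic units fire together."""
    return weight + LEARNING_RATE * pre * post

weight = 0.0
# A repeated message: both "neurons" are fully active on every exposure.
for exposure in range(1, 11):
    weight = hebbian_update(weight, pre=1.0, post=1.0)
    print(f"exposure {exposure:2d}: connection strength = {weight:.1f}")

# The connection grows with every repetition: repetition, not truth,
# is what makes an association feel increasingly familiar and plausible.
```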
If we view Big Data and algorithms as aids that accompany people and complement their own abilities, and if we understand Big Data to be a tool used by people to determine frequencies and correlations which then lead them to discover new sets of material, then it is conceivable that timescales will be shortened, costs lowered and goals pursued in a more direct and productive manner. But, even in this domain, it is people who determine what the parameters are and how to interpret them. What means do we have at our disposal to counter manipulation, Agnotology etc.?
Perhaps the most crucial aspect of Big Data and AI is the data itself and its interpretation. Why? Because no data set comes without classification; every data set is based on human categorisation, classification and judgement. With regard to law, this is important when we talk about the values and morals that form the basis of our decision-making. Every AI that is trained to make decisions about “right” and “wrong” can only do so because it is given certain parameters that are human-made: for example, what we value in life and how much a life, or different lives, are worth. By now, many people have heard about the latter debate and the moral dilemma related to it when it comes to self-driving cars.
On a more basic level of data interpretation, we just need to take a look at recent examples that show how biased every AI is, or rather how every AI can reveal human biases. Recently, The Economist published an illustrative example of this “bias in, bias out” relationship.1 In the pictures used as training data, women and people of colour were not only underrepresented but were also depicted as stereotypes.
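To see how “bias in, bias out” works mechanically, consider a minimal, hypothetical sketch in Python (it is not the system The Economist reported on): a deliberately naive model trained on data in which one group is underrepresented scores well overall while failing that group completely.

```python
# Toy sketch of "bias in, bias out": a model trained on skewed data can look
# accurate overall while misclassifying the underrepresented group every time.
from collections import Counter

# Hypothetical, imbalanced training data: group "A" dominates group "B".
training_labels = ["A"] * 90 + ["B"] * 10

# A deliberately naive "model": always predict the most frequent training label.
majority_label = Counter(training_labels).most_common(1)[0][0]

def predict(example: str) -> str:
    # Ignores its input entirely -- it has only ever "learned" the majority.
    return majority_label

# A test set with the same 90/10 skew as the training data.
test_set = [("image_of_group_A_member", "A")] * 90 + [("image_of_group_B_member", "B")] * 10

overall = sum(predict(x) == y for x, y in test_set) / len(test_set)
group_b = sum(predict(x) == y for x, y in test_set if y == "B") / 10

print(f"overall accuracy:  {overall:.0%}")   # 90% -- looks impressive
print(f"accuracy, group B: {group_b:.0%}")   # 0% -- the minority is always wrong
```

Real training pipelines are far more sophisticated than this toy, but the structural lesson carries over: when a group is underrepresented in the training data, the model’s errors concentrate on exactly that group, and a single overall accuracy figure hides this.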
Crucially, we can use these revelations to become aware of our own biases and of the stereotypical depictions in our daily lives, and eventually learn to address them by developing the next generation of AI based on less biased data and decisions. Thus, in my opinion, the most important tool against detrimental manipulation and Agnotology is an open and critical societal as well as political discussion about our underlying motivations, values and morals. In order to have this discussion, we have to place more emphasis on what is called critical thinking, in educational institutions as well as in the media.
Digitalisation is leading to further automation and thus to a reduction in human labour. We can observe more and more offers, appointments, job applications etc. all being handled online. Each of these online processes works via pre-filled boxes and forms on the screen, and all of them operate without people. Thus, a sort of standardisation and normalisation is taking place so that, owing to Big Data and algorithms, evaluations can be carried out faster, more efficiently and better. Isn’t this trend the opposite of the naivety, forbearance and curiosity you so strongly advocate? How can innovation possibly work together with standardisation? How can we expect forbearance when there is a software programme at the other end, or a robot, or an avatar, or a computer? Is there a place for curiosity and a spirit of discovery when the information being suggested has been customised through “targeted marketing”? Is the machine going to become the new scapegoat? To put it differently, is this just a fast track for our brains towards “learned helplessness”, a term you often mention?
I think what we really have to learn is to consider technology in general, and Big Data including automation specifically, as something that can help us. The most important word here being “can”, of course. We need to stop promoting and using technology against humans and human interests. Instead, we should focus more on how clever technology – made by humans – can and should be applied to trigger those aspects I advocate as the main ingredients not only of a happy and healthy life, but also of human progress in general. Only because we have this amazing curiosity and the ability to imagine and predict future outcomes were we able to fly to the moon, develop smartphones and create vaccines. Thus, the advances of the future greatly depend on how we lead this discussion at a societal level. We cannot and should not leave it to the big tech companies to determine how we use our time and attention – and how they use the data generated by that time – simply to become even more powerful.
In other words, it is a societal challenge and opportunity at the same time to determine how we use the ever more advanced technologies that we as humans have created. Naturally – from my perspective as a neuroscientist and cognitive psychologist – we have to look at what are traditionally considered “tech debates” from a psychological, mind-based angle in order to avoid the creation of mind-less behaviour.
You stand for constructive criticism, for stimulating managers and co-workers, for innovation and for new solutions. What are the solutions in a digital age? How can Big Data help, and where will it become a hindrance? What roles do the media and journalism have in a digital age? How can discussion and journalism help us analyse these new trends and enable people to shape this development themselves while remaining fully aware of the process?
Adding to the more general remarks I made above, I am convinced that Big Data can support human decision-making in every domain. We already have countless examples from the medical world, where Big Data is greatly advancing diagnosis, for example in radiology.2 It can help in the fight against world hunger, the climate crisis and injustice, starting by making human biases and stereotypes visible, as mentioned above.3 Basically, it can help humans to “do good” better. Why and how? Because AI is better than humans at detecting patterns in data and making predictions. The weather forecast is a very early example of this use of technology and is now being employed to warn people of severe weather events (see the small sketch after this answer). To cut a possibly long story short, I want to end with a call to everybody reading this: when it comes to Big Data and its use(fulness), we – as a global society – should focus more on the “what for” instead of the “against what”. We should ask ourselves what life we want to live, not what life we want to avoid.4 This mindset – often called a growth mindset in psychology – enables us to use Big Data and AI to stimulate curiosity, solution-oriented thinking and thereby human flourishing.
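As a miniature illustration of prediction from patterns, the following hypothetical sketch fits a simple linear trend to a handful of made-up temperature readings and extrapolates one step ahead. Real forecasting models are vastly more complex, but the principle of learning a pattern from data and projecting it forward is the same.

```python
# Toy sketch of prediction from a pattern: fit a linear trend to a short
# series of made-up temperature readings and extrapolate one day ahead.
readings = [14.1, 14.8, 15.2, 15.9, 16.5]  # hypothetical daily mean temperatures
n = len(readings)
days = list(range(n))

mean_day = sum(days) / n
mean_temp = sum(readings) / n

# Ordinary least-squares slope and intercept.
numerator = sum((d - mean_day) * (t - mean_temp) for d, t in zip(days, readings))
denominator = sum((d - mean_day) ** 2 for d in days)
slope = numerator / denominator
intercept = mean_temp - slope * mean_day

forecast = intercept + slope * n  # project the learned trend one day ahead
print(f"trend: {slope:+.2f} degrees/day, forecast for day {n}: {forecast:.1f} degrees")
```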
My favourite quotation in this context is:
“Problem talk creates problems, solution talk creates solutions”.
Steve de Shazer
Prof. Urner, thank you for sharing your insights on the possibilities of constructive criticism and news coverage in relation to Big Data.
Thank you, Dr Caldarola, and I look forward to reading your upcoming interviews with recognised experts, delving even deeper into this fascinating topic.
3 https://perspective-daily.de/article/453/dRGqUE7x
4 https://perspective-daily.de/article/1298/p3cRK4j9