The digital revolution is accelerating automated processes and will most likely dampen the majority of the forces driving GDP growth, namely by reducing the number of employed workers, decreasing the demand for skills and shrinking the available capital stock. Is GDP still the right parameter for assessing the performance and the economic and social well-being of a country?
In her latest Duet Interview, Dr Caldarola, author of Big Data and Law, asked Prof. Marc Bertonèche, Economics Professor at universities such as INSEAD, Harvard, Oxford and Bordeaux, to discuss the impact of Big Data on economies.
Gross domestic product (GDP) is the standard measure of the value added or created through the production of goods and services in a country during a certain period. As such, it also measures the income earned from that production, as well as the total amount spent on final goods and services minus imports. Is innovation a performance indicator, and is Big Data an innovation, perhaps even a disruptive one? Given the nature of Big Data, can it therefore bolster GDP?
Prof. Dr. Marc Bertonèche: Big Data, as such, does not raise the Gross Domestic Product. It does, however, create conditions which can lead to a significant increase in the GDP. We all know that the GDP depends upon personal consumption, business investment, government spending and net exports. In the context of digitalisation, the driving forces, fed by the huge development of Big Data, that will raise the GDP and significantly impact economies include:
- New goods and services are being produced as a result of more accurate marketing. Companies are anticipating what customers really want and will do so even more adeptly in the future. They will, therefore, be able to generate more tailor-made and sophisticated products and services which will lead to new and increased revenue streams.
- Business processes can now be optimised and organisations better managed. More precisely, Big Data will allow for demand to be forecast more accurately, and thus, boost productivity, optimise pricing, improve and strengthen supply chains, and reduce employee turnover among other processes.
- Innovation will be accelerated through shorter research and development cycles (the case of the vaccines against COVID-19 is a perfect example). Innovation, in particular disruptive innovation, will be the driving force behind raising the GDP. Our traditional managerial tools will have to be redefined and modernised to avoid the widely used ROI (« Return On Investment ») becoming a « Restraint On Innovation »!
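The components mentioned above (personal consumption, business investment, government spending and net exports) combine in the standard expenditure identity for GDP. As a purely illustrative sketch, with entirely hypothetical figures rather than actual statistics:

```python
# GDP under the expenditure approach: GDP = C + I + G + (X - M).
# All figures below are hypothetical, for illustration only
# (billions of a generic currency).

def gdp_expenditure(consumption, investment, government, exports, imports):
    """Sum the expenditure components; net exports = exports - imports."""
    return consumption + investment + government + (exports - imports)

gdp = gdp_expenditure(
    consumption=1400,   # personal consumption (C)
    investment=450,     # business investment (I)
    government=500,     # government spending (G)
    exports=300,        # exports (X)
    imports=350,        # imports (M) are subtracted
)
print(gdp)  # 2300
```

Note that imports enter with a negative sign: with imports exceeding exports, net exports reduce the total, which is why GDP is defined as spending on final goods and services minus imports.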
Because data is easier to collect, transmit, store and analyse, it is, and will increasingly become, the crucial engine of growth of the GDP.
We have just touched on a key word by mentioning analysis. It is important to emphasise that data is a rather useless commodity if there are no efficient ways to extract meaningful information from it.
Key strategic skills will obviously be critical to turn data into action (in Artificial Intelligence, Machine Learning, the Internet of Things…). More and more sophisticated analytic approaches are needed to make the most of Big Data. I cannot stress enough that data on its own is simply raw material, which is of little value if not correctly analysed and processed.
A recent report by the McKinsey Global Institute estimates that Big Data could generate an additional $3 trillion ($3,000 billion!) in value every year, of which $1.3 trillion would benefit the US. Even if not all of these benefits directly affect GDP as measured today, their impact on economic growth will remain more than significant.
An MIT study by Erik Brynjolfsson found that companies adopting data-driven decision-making achieved 5% to 6% higher productivity than their peers.
Omidyar Network has just released a study concluding that the use of Big Data for government policies could boost annual income in the G20 countries by between $700 billion and $950 billion.
And I could easily add more to the list of studies illustrating the strong, positive impact of Big Data on the growth of economies and the level of their GDP, and this is only the beginning. The world is generating data in ever larger quantities: the data avalanche is dramatically increasing both the variety of data collected and the velocity at which data is recorded. IFLScience, in an article entitled « How much data does the world generate every minute? », estimates that 90% of the data available in the world today was generated in the last two years, and that the amount of data doubles every two years! That is just one more reason why I cannot overemphasise the importance of transforming these data into really useful and efficient information through careful and appropriate steps of analysis. Graduate academic programmes should include these methods and techniques in their curricula, and organisations should develop training courses for their employees.
How can Big Data improve the way we measure the overall performance (success or failure) of the economy? Will it give us the opportunity to develop new indicators?
Criticism is growing among economists on the usefulness and relevance of Gross Domestic Product (GDP) as a measure of economic and social well-being. They emphasise the need to develop new ways of evaluating progress using a broader and much more relevant measurement of performance. Joseph Stiglitz, the 2001 Nobel Prize in Economics recipient, has been one of the strongest voices to argue in that direction and has led a commission tasked with defining new approaches to measure economic performance and social progress. « It is clear », he wrote, « that GDP is not an adequate summary statistic of how well we are doing. »
The current pandemic has clearly demonstrated that greater wealth and higher income per capita do not necessarily translate into better individual or societal well-being. The United States, despite having the highest GDP and one of the highest incomes per capita in the world, has been greatly affected by the pandemic and has revealed dramatic inequalities in healthcare access. Life expectancy in the US is lower than in many other countries with comparable GDP and income per capita.
Why is this issue so crucial today? Because, as Stiglitz argues, « what we measure affects what we do. If we measure the wrong thing, we will do the wrong thing. » How can we develop a better performance measurement system for any economy? We need to track economic growth, of course, but also social, political, environmental and societal progress. A single summary statistic, whatever it is, will never be able to integrate and reflect the complexity of a society. Emphasising a single indicator is one of the reasons why GDP has been a very poor metric of performance and has been facing growing criticism. What we need is a « dashboard » approach, and the data available today and tomorrow will make it possible to build one. This set of holistic parameters should obviously include economic data, but also social indicators such as health, education, equality and security, together with governance issues, such as human rights and progress towards a democratic process, and environmental targets achieved, such as emission reduction and biodiversity.
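The « dashboard » idea can be made concrete with a toy sketch: instead of collapsing everything into one number, the indicators of each pillar are kept side by side. The pillar names, indicator names and values below are hypothetical illustrations, not a proposal for an actual index:

```python
# A toy "dashboard" of indicators, as opposed to a single summary statistic.
# Indicator names and numbers are hypothetical illustrations only.
from dataclasses import dataclass, field

@dataclass
class Dashboard:
    economic: dict = field(default_factory=dict)       # e.g. GDP growth, median income
    social: dict = field(default_factory=dict)         # e.g. life expectancy, schooling
    environmental: dict = field(default_factory=dict)  # e.g. emissions, biodiversity

    def report(self):
        """Flatten all pillars into one view, without merging them into one number."""
        return {f"{pillar}.{name}": value
                for pillar, indicators in vars(self).items()
                for name, value in indicators.items()}

d = Dashboard(
    economic={"gdp_growth_pct": 2.1},
    social={"life_expectancy_years": 81.0},
    environmental={"co2_tonnes_per_capita": 5.4},
)
print(d.report())
```

The design point is precisely that `report()` returns several labelled figures rather than a weighted average: any weighting scheme would reintroduce the single-indicator problem the dashboard is meant to avoid.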
My opinion is:
Prof. Dr. Marc Bertonèche
Big Data will give us a chance to ensure that the widely used parameter Gross Domestic Product (GDP) does not degenerate into a « Genuinely Dated Parameter » and that the traditional Return On Investment (ROI) does not turn into « Restraint On Innovation »…!
What holds true at the macroeconomic level is of course equally valid at the company level. Organisations need to develop metrics integrating the three pillars of performance: the economic pillar, the social one, as well as the environmental and governance one. The ESG approach has been overwhelmingly accepted, and even required, in more and more countries. Investments integrating this triple assessment of performance have experienced incredible growth and, according to a study published by the US Business Roundtable, reached about $30 trillion ($30,000 billion) in 2019, a growth of nearly 70% since 2014 and a tenfold increase since 2004!
What do you think are and will be the economic and social consequences of technological progress, automation and digitalisation on jobs?
This is a very difficult question, and nobody really knows the answer to it. There are two very divergent scenarios which can be considered: a very pessimistic scenario and an optimistic one.
Looking first at the former, the pessimistic scenario has been described and analysed in various studies. A study conducted in 2013 by Carl Benedikt Frey and Michael Osborne from the University of Oxford estimated at 47% the proportion of jobs heavily threatened by technological progress and automation over the next two decades, including accountants, auditors, salespersons, real estate agents, lawyers and… economists. For non-qualified workers, a study by MIT (Erik Brynjolfsson and Andrew McAfee) forecasts critical unemployment in the coming decades because of the huge development of robots, « the immigrants of the future », and the exponential increase of what David Graeber, from the London School of Economics, calls « bullshit jobs »: ways of keeping people artificially busy for no economic reason other than compensating for the inability of the economic system to generate enough jobs to guarantee some level of social stability.
In the finance sector, the impact would, in this pessimistic scenario, be disastrous. Lee Coulter, CEO of Transform AI and a recognised expert in automation and artificial intelligence, said at a conference sponsored by CFO in New York City in November 2019: « 70% of what is done today in a Finance department can be automated, and a huge number of jobs will disappear ».
What is fairly certain is that retraining will become an absolute necessity should this state of affairs come to pass. IBM predicts that more than 120 million workers worldwide will need to be retrained in the next three years. What is also obvious is that this scenario would require the implementation of a minimum social income for everybody in the economy.
Turning now to the optimistic scenario, this view relies on the basic assumption that, although automation and technology will destroy many current jobs, new jobs will arise to replace traditional activities. The argument draws on past experience: the development of computers and the internet destroyed jobs but created many more new activities. By taking over the traditionally simple, repetitive and boring tasks, automation will allow people to move into more attractive and challenging jobs, a shift which will let us develop new skills in the workplace, such as creativity, critical thinking and interaction with others, to name a few. Big Data will undoubtedly create massive demand for new skills. The McKinsey Global Institute predicts that, by 2024, there will be a shortage of approximately 250,000 data scientists in the US alone.
I do not want to go into further detail concerning each scenario, but the debate is on, and nobody is able to fully assess the impact of new technologies and Big Data on the job markets. Probably, as is often the case with such issues, « in medio stat virtus », with reality lying somewhere between the two extreme theories. What is obvious, however, as mentioned earlier, is that there will be a crucial need for retraining, and we would all do well to prepare our economies for this huge challenge.
In your opinion, what are the main challenges generated by Big Data?
There are several challenges and concerns which should be addressed with great attention. The first one, which is on everybody's mind, is privacy. Because data is collected everywhere, from personal computers and smartphones to sensors in homes, it may include very sensitive information. Even if data sets are carefully constructed to anonymise individuals, as Matthew Harding and Jonathan Hersh remind us in their note « Big Data in Economics », « information that is de-identified may be easily ex-post identified using machine learning tools ». Several studies, for example, have shown how easy it is to identify almost 100% of the individuals in a database containing supposedly pseudonymised credit card transactions. It is worth remembering that Big Data relies on free data, collected and analysed without any consent from the people these personal data describe, a fact which contradicts the founding principles of our capitalist systems.
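The re-identification risk described above can be illustrated with a deliberately tiny, entirely made-up dataset: an adversary who knows just two of a person's transactions (merchant and day) can often single out that person's pseudonymous record, even though no name appears anywhere. This is only a sketch of the idea behind those studies, not their actual method:

```python
# Toy illustration of re-identification in pseudonymised transaction data.
# The dataset and the adversary's auxiliary knowledge are entirely fabricated.

transactions = [
    # (pseudonym, merchant, day)
    ("user_a1", "bakery", 3), ("user_a1", "bookshop", 5), ("user_a1", "cinema", 9),
    ("user_b2", "bakery", 3), ("user_b2", "pharmacy", 6), ("user_b2", "cinema", 9),
    ("user_c3", "grocer", 2), ("user_c3", "bookshop", 5), ("user_c3", "cafe", 8),
]

def matching_pseudonyms(known_points):
    """Return every pseudonym whose history contains all known (merchant, day) points."""
    candidates = {p for p, _, _ in transactions}
    for merchant, day in known_points:
        candidates &= {p for p, m, d in transactions if (m, d) == (merchant, day)}
    return candidates

# Knowing the target visited the bakery on day 3 narrows it to two candidates;
# adding the bookshop visit on day 5 pins down a single pseudonym.
print(matching_pseudonyms([("bakery", 3), ("bookshop", 5)]))  # {'user_a1'}
```

With realistic data the same intersection logic scales up: a handful of spatio-temporal points is typically unique to one individual, which is why pseudonymisation alone is weak protection.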
The second challenge is security. Protecting data, which has become a « critical business asset » according to Bernard Marr in his excellent book « Tech Trends in Practice » (Wiley, 2020), is vital for any organisation and will become even more so in the future. With the development of the IoT (Internet of Things), the threat of attacks by hackers will increase tremendously, as many connected devices are totally unsecured. A solid and efficient data security policy is therefore essential.
The third challenge is that Big Data may suffer from a heavy selection bias, depending on how and by whom data is generated. As Matthew Harding and Jonathan Hersh note in the above-mentioned study, « Not everyone has the same propensity to use digital devices, or any app or website, resulting in possible biases, particularly when making generalisations about subgroups which happen to be represented in the data ». Appropriate methods should be used to minimise, or if possible eliminate, these biases (social bias, generation bias, racial bias, etc.). Moreover, many valuable sources of Big Data may be controlled by private organisations which could impose limits on user freedom, resulting in publication biases.
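One standard family of methods for mitigating such selection bias is reweighting: if younger, digitally active people are over-represented in the sample but the true population shares are known, estimates can be post-stratified toward those shares. The groups, shares and outcome values below are hypothetical, purely to show the mechanics:

```python
# Post-stratification: a common correction when sampled subgroup shares
# differ from their known population shares.
# All shares and outcome values here are hypothetical.

sample = {            # group -> (share of the sample, mean outcome observed)
    "under_30": (0.60, 72.0),   # over-represented among digital-device users
    "over_30":  (0.40, 55.0),
}
population_shares = {"under_30": 0.35, "over_30": 0.65}

# Naive estimate: weight each group by its (biased) share of the sample.
naive = sum(share * value for share, value in sample.values())

# Reweighted estimate: weight each group by its known population share instead.
weighted = sum(population_shares[g] * value for g, (_, value) in sample.items())

print(naive, weighted)  # the naive figure overstates the population mean
```

The correction only removes bias along the dimensions being reweighted, of course; biases driven by unobserved differences between users and non-users remain.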
Finally, Big Data is costly to collect, store, prepare and clean, and analyse through analytics models, and it requires heavy investments in technology and human skills. Are all companies, and especially small businesses, able to access Big Data? Can all companies afford the high budget, the complex infrastructure and the extensive know-how needed to benefit from Big Data?
These are valid questions which must be considered, even if various experts, including Bernard Marr in the book mentioned earlier, argue that « thanks to augmented analytics and Big-Data-as-a-Service (BDaaS)… Big Data will be accessible to anybody, even small businesses, without the need for expensive infrastructure investments and overcoming the massive skills gap in Big Data ».
These challenges are enormous and call for improving data literacy across the board through knowledge-sharing and continuous training and education. They also stress the urgent need for all organisations to create a data strategy, the objective of which is « to focus on finding the exact, specific pieces of data that will best benefit the organization » (B. Marr).
A very ambitious task indeed, but absolutely necessary if we are to enjoy the huge benefits of Big Data.
Prof. Dr Bertonèche, thank you for sharing your thoughts on the impact of Big Data on economies.
Thank you, Dr Caldarola, and I look forward to reading your upcoming interviews with recognized experts, delving even deeper into this fascinating topic.