Technology is coming ever closer to the essential and personal affairs of human beings. Whether it be biotechnology, medicine or digital applications that record and shape our daily interactions, these technologies influence the decisions made about us both as individuals and as a collective. The call for ethics is growing louder and louder. The pandemic has intensified ethical debates surrounding the big questions: whether, and if so, which ethical understanding will guide humanity; how ethics will affect future developments in the digital age; and how ethical values can and should be respected.
In the latest of her Duet interviews, Dr Caldarola, author of Big Data and Law, and AI expert Leila Taghizadeh discuss the compatibility of ethics and AI.
To start off our conversation, can you first give our readers a short introduction to what AI (Artificial Intelligence) is and how it works?
Dr Leila Taghizadeh: Basically, it is the intelligence shown by a machine or any device built by humans. You might already consider all computers to be intelligent, because some of them can perform tasks which are way beyond the abilities of even the most intelligent people; a simple example being applications which can solve complex math equations. However, we are looking at a slightly different situation: In the world of “traditional” computers, a human creates the algorithm and the program, while the computer only follows certain commands and executes exactly what it has been told to do! Imagine a gate where you could drop off your parcels to be processed. With a traditional computer, you would have to define the different types of parcels and make them known to the machine. Otherwise, the computer wouldn’t know what to do when it received a box instead of an envelope, for example. So, in this hypothetical program, you define that a parcel can be an envelope or a box. Now, if you try to drop off a bag, the machine wouldn’t know what to do and, depending on how it was programmed, would either simply do nothing or give an error message. In the world of AI, however, a machine looks for similarities between the bag and the boxes and envelopes it already knows, and treats the bag as a parcel. A machine is capable of learning!
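To make the contrast concrete, here is a minimal, purely illustrative sketch (not from the interview; the feature values and labels are invented for this example). The rule-based gate rejects anything its programmer did not anticipate, while a simple similarity-based classifier treats the unseen bag as a parcel because it resembles a known box:

```python
# Illustrative sketch only: a rule-based "parcel gate" versus a learning
# classifier. All data below is invented for the example.

# Rule-based: only the item types the programmer anticipated are handled.
def rule_based_gate(item_type: str) -> str:
    if item_type in ("envelope", "box"):
        return "accepted as parcel"
    return "error: unknown item"  # a "bag" is simply rejected

# Learning-based: a 1-nearest-neighbour sketch over made-up features
# [width_cm, height_cm, rigidity 0..1], labelled by a human beforehand.
training_examples = [
    ([30.0, 2.0, 0.1], "parcel"),         # an envelope
    ([40.0, 30.0, 0.9], "parcel"),        # a box
    ([25.0, 15.0, 0.2], "not a parcel"),  # loose papers
]

def learned_gate(features):
    def distance(a, b):  # Euclidean distance between feature vectors
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    # Classify by the closest labelled example.
    _, label = min(training_examples, key=lambda ex: distance(ex[0], features))
    return label

print(rule_based_gate("bag"))           # -> error: unknown item
print(learned_gate([35.0, 25.0, 0.4]))  # -> parcel (the bag resembles a box)
```

The particular algorithm is beside the point: any learner that generalises from labelled examples, rather than from hand-written rules, exhibits the behaviour described here.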
Is AI always driven by a human being – in other words, will there always be a person at the end of a development – in your article you refer to them as AI technologists – to whom the outcome of AI-driven technology can be traced back and who can, therefore, be held accountable? Or will AI also become an independent result of machine learning without the influence, impetus or guidance of a human being, because machines/robots will be able to improve and advance themselves? If so, who is liable – especially if the current laws only attribute profits, loss and damages to human beings?
This is a hot topic in the domain of AI; regulation still has to catch up with the development of the technology. The nature of AI should enable machines to operate without human guidance or interference at some level – at least at some point in the future. However, humans will probably still be at the beginning of the chain: at that moment in time when the AI in question is being created. That is why we need to make sure that the relevant algorithms are free of bias and discrimination. You cannot build an AI on a discriminatory algorithm and expect it to behave “fairly”. Now, here comes the problem: We, as humans, have lots of conscious and unconscious biases; we create rules in our society based on our biases and we rule society with our biases. So how can we expect AI not to have them? With AI, though, we have a bigger problem: The biases will be propagated much faster and in bigger dimensions! Can we change this? I still believe we can, but it will need to be a conscious decision and a collective effort.
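To illustrate the propagation problem described here, consider a deliberately simplified sketch (my own construction, not something from the interview; the data and the decision rule are invented). A naive model that merely mirrors biased historical decisions will automate that bias at scale:

```python
# Illustrative sketch only: a bias present in historical data is
# reproduced, and thereby scaled up, by a naive "learned" decision rule.
historical_hires = [
    # (years_of_experience, group, hired) - "group" is a proxy attribute
    (5, "A", True), (2, "A", True), (7, "A", True),
    (6, "B", False), (3, "B", False), (8, "B", False),
]

def naive_model(group: str) -> bool:
    """Predict the majority historical outcome for the given group."""
    outcomes = [hired for _, g, hired in historical_hires if g == group]
    return sum(outcomes) > len(outcomes) / 2

print(naive_model("A"))  # -> True:  group A was always hired historically
print(naive_model("B"))  # -> False: the old bias is now automated and
                         #    applied consistently to every new applicant
```

A real system would use a far more sophisticated model, but the mechanism is the same: if the training data encodes a bias, the model learns and repeats it unless that bias is consciously detected and corrected.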
I see another interesting angle to your question which concerns liability. When we created corporations and institutions and gave them power of attorney, how did we answer this question of liability? A regime, a government, a corporation has a certain power over human life at various levels – who are these institutions? Are they human? Or “artificial or imaginary” objects? The world of AI won’t be different in conceptual terms from today’s institutions.
By definition, AI includes the word “intelligence”. As we know, AI works with quantitative methods, correlations and the results of algorithms, which lead to “the biggest, the most frequent, etc.”. Can this kind of technology be “intelligent”?
I would say it depends on how we define intelligent. We have different types of AI [1]:
- Narrow or Weak AI: This is what is mainly used today; e.g., for facial recognition, speech recognition/voice assistants, driving a car, or searching the internet – and this technology is very intelligent at completing the specific task it is programmed to do.
- Strong or Deep AI: This technology has not yet been achieved; it is a type of AI for which we need to find a way to make machines conscious. Machines would have to take experiential learning to the next level, not just improving the efficiency of individual tasks, but gaining the ability to apply knowledge gained from experience to a wider range of different problems. This sort of AI could be achieved in the near future as well; all we need is for a computer to be exposed to many different situations and to learn from them. It might not be easily attainable for all domains, but we have made good progress with this type of AI in the lab, in situations where AI can recognise human feelings or tone of voice and adapt its responses accordingly.
- Superintelligent AI: This is a type of AI where machines become self-aware and surpass the capacity of human intelligence and ability. This type only exists in the world of sci-fi – at least so far; but, in my opinion, even this type can eventually be achieved. After all, throughout history, we humans have created everything we dreamed of.
You work in the domain of AI and ethics, and you were a member of a group that developed ethical principles for AI. Are those ethical principles meant to be a general guideline for AI technologies or machines? Or are they intended to be the basis for an ISO standard or an international treaty? Will or should they be part of a company’s corporate responsibility, corporate governance or code of conduct? Or do they represent new innovation goals as a departure from Six Sigma? How will this translate into concrete terms: Will human beings learn and abide by the ethical principles, or will ethics be a technical part of the software that influences the AI process (ethics by design)?
Well, in an ideal world, ethics should be part of the structure! This is the only way we can make complete use of AI in an ethical way. However, ethics vary from nation to nation. This creates some concern, and no straightforward solution has yet been found. But will it happen at a broad level? I am not sure. My hope, and the effort of the community, has been to ensure that we have some standards which can be altered or adjusted. After all, we have rules in any society, so why not in this case? The situation, however, is a bit more complex: How well can we assess just how fair and ethical these elements are? Should we destroy an AI if its algorithm is not fair? Or would we be unfairly punishing the creator? We might want to borrow some concepts from our current models for companies: When a company commits a crime on a big scale, what are the consequences?
I realise that I am asking more questions than providing answers, but I think this is the role of the community: to ask questions, to create awareness regarding unresolved issues, and to make those who come up with the rules more accountable.
As far as I am aware, machines do not possess innately human qualities or emotions, although scientists predict that it is only a matter of time before they are capable of this as well. Will the quantum computer become a game changer? Will machines develop emotions, or will only the results of what they have learnt display emotional, ethical and human traits?
For Strong AI, quantum computing would be of great assistance. We simply need to increase the speed at which test data is generated and at which the algorithm is exposed to that data: The faster we achieve this in technological terms, the sooner we arrive at the results you are talking about. Moreover, we cannot forget that, for certain complex tasks, today’s computers cannot deliver the required power.
We also have to consider another matter: We not only need quantum computing to have more power, but also to use less energy. Looking at today’s energy consumption and global warming, I think we can all agree that we won’t be able to live on this planet if we extrapolate the energy that would be required for a concomitant increase in computing power. Therefore, we need to reduce the energy required. Such a condition could be achieved within the domain of nano quantum computing, where, simply stated, electrons generate current not by consuming energy but solely through their Brownian motion. That is another exciting topic in this domain.
There is a common understanding that a business person’s primary motivation and goal is to increase the profits of his or her business and that s/he will probably not care about noble (ethical) motivations. The same could be said of technologists, isn’t that so? Are they also only interested in fostering innovation – regardless of laws, ethics and fields of application? Or, in other words, will and can a technologist cease to satisfy his or her curiosity because the outcome might have a negative impact? Is a technologist oriented only towards feasibility and producibility rather than ethical duties?
I think this might be true of some people. However, we shouldn’t generalise. Regardless of the topic, whether it be AI or something else, there are smart, responsible people as well as some narcissists among us who are equally intelligent. Some will create technologies to improve the quality of life for others, while others might only create technologies for their own benefit! We have seen different applications of digital technologies, and I believe it will be the same for AI! As in every other field, everyone (fair or selfish, ethical or unethical) will be able to participate in this domain. This means that regulating it will presumably become necessary – at least in the long term.
The digital world has made the world more accessible and less dependent on the nobility. It has resolved lots of problems but, at the same time, it has created plenty of new ones. In the world of digitisation, we can easily listen to lots of motivational TED talks, have access to education, and become more aware and supportive of peaceful movements. At the same time, racist and criminal elements can also seek more support and expand more easily and faster. However, if we ask ourselves what the net effect of the digital world has been, I think we would all agree that it has been, on the whole, more positive. This development is also the next step to be taken by our civilisation; it is a one-way street and there is no way back. We have grown up in this world and developed the skills required to live in it, and, in a few hundred years, human intelligence and brains will be completely different from what they were a few hundred years ago.
When looking at life on a timeline, any discovery or step is just a step; it will have a past and a future, and the rise of AI will be the same.
My opinion is: “AI is just another step in human development!”
You said earlier that ethics vary from nation to nation. One of the best examples is the multitude of different privacy laws – we need only look at the European Community, the US and China. In order to come to a common understanding, a common set of ethics – at least in democratic states – is necessary. How can people agree on a common set of ethics if they do not have a shared history that leads them to the same or similar ethical perceptions? Or will machines and technology solve this problem, because they will apply the same ethical rules, so that machines and technology will teach us what is, or is supposed to be, ethical?
I do not think we will reach a common agreement; we will still grow regionally and will have some international umbrella rules; other regulations have developed in this way, after all. This might, for example, prevent a specific AI from one region from being sued in another. I think AI – at least in the short term – will not change the nature of our civilisation. With the advent of Superintelligent AI, we might see the next revolution (after the agricultural and industrial revolutions). However, I do hope that we will have a more accessible and fairer world by then!
Leila, thank you for sharing your insights on the compatibility of ethics and AI.
Thank you, Cristina, and I look forward to reading your upcoming interviews with recognised experts, delving even deeper into this fascinating topic.
[1] https://www.investopedia.com/terms/w/weak-ai.asp