Artificial Intelligence (AI) and Ethics

Dr Leila Taghizadeh, MBA

Technology is coming closer and closer to the essential and personal affairs of human beings. Whether it be biotechnology, medicine or the digital applications that record and shape our daily interactions, all of these technologies influence humans, both collectively and individually, when decisions are made. The call for ethics is growing louder and louder. The pandemic has intensified the ethical debates surrounding the big questions: whether, and if so which, ethical understanding will guide humanity; how ethics will affect future developments in the digital age; and how ethical values can and should be respected.

In the latest of her Duet interviews, Dr Caldarola, author of Big Data and Law, and AI expert Leila Taghizadeh discuss the compatibility of ethics and AI.

To start off our conversation, can you first give our readers a short introduction to what AI (Artificial Intelligence) is and how it works?

Dr Leila Taghizadeh: Basically, it is the intelligence shown by a machine or any device built by humans. You might already consider all computers to be intelligent because some of them can perform tasks which are way beyond the abilities of even the most intelligent people; a simple example being the applications which can solve complex math equations. However, we are looking at a slightly different situation: In the world of “traditional” computers, a human creates the algorithm and the program, while the computer only follows certain commands and executes exactly what it has been told to do! Imagine a gate where you could drop off your parcels to be processed. With a traditional computer, you would have to define the different types of parcels and make them known to the machine. Otherwise, the computer wouldn’t know what to do when it received a box instead of an envelope, for example. So, in this hypothetical program, you define that a parcel can be an envelope or a box. Now, if you try to drop off a bag, the machine wouldn’t know what to do and, depending on how it was programmed, would either simply do nothing or give an error message. In the world of AI, however, a machine looks for similarities between the bag and the boxes and envelopes it already knows, and will treat the bag as a parcel. A machine is capable of learning!
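
To make this contrast concrete, here is a minimal sketch in Python. It is purely illustrative: the feature values, the tiny training set and the handler names are invented for this example, not taken from the interview. A rule-based handler rejects anything it was not explicitly told about, while even the simplest learned classifier labels an unseen bag by its similarity to the parcels it already knows.

```python
# Traditional program vs. learning machine, on the parcel-gate example.
# All features, values and data below are invented for illustration.

def rule_based_handler(item_type: str) -> str:
    # A traditional computer executes exactly what it was told:
    # only the parcel types defined by the programmer are accepted.
    if item_type in ("envelope", "box"):
        return "accept as parcel"
    return "error: unknown item"  # a bag gets an error (or nothing at all)

# Toy feature vectors: (width in cm, height in cm, rigidity from 0 to 1)
TRAINING_EXAMPLES = [
    ((23.0, 11.0, 0.1), "parcel"),      # a typical envelope
    ((40.0, 30.0, 0.9), "parcel"),      # a typical box
    ((2.0, 1.0, 1.0), "not a parcel"),  # e.g. a key dropped by mistake
]

def learned_handler(features: tuple) -> str:
    # 1-nearest-neighbour: give the new item the label of its most
    # similar known example instead of demanding an exact match.
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    _, label = min(TRAINING_EXAMPLES, key=lambda ex: distance(ex[0], features))
    return label

print(rule_based_handler("bag"))           # error: unknown item
print(learned_handler((35.0, 25.0, 0.4)))  # parcel -- a bag resembles a box
```

The second handler generalises for exactly the reason described above: it compares rather than matches.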

Is AI always driven by a human being – in other words, will there always be a person at the end of a development – in your article you refer to them as AI technologists – to whom the outcome of AI-driven technology can be traced back and who can, therefore, be held accountable? Or will AI also become an independent result of machine learning without the influence, impetus or guidance of a human being, because machines and robots will be able to improve and advance themselves? If so, who is liable – especially if the current laws only attribute profits, losses and damages to human beings?

This is a hot topic in the domain of AI; the regulations governing AI still have to catch up with its development. The nature of AI should enable machines to operate without human guidance or interference at some level – at least at some point in the future. However, humans will probably still be at the beginning of the chain: at the moment in time when the AI in question is being created. That is why we need to make sure that the relevant algorithms are free of bias and discrimination. You cannot build an AI on a discriminatory algorithm and expect it to behave “fairly”. Now, here comes the problem: We, as humans, have lots of conscious and unconscious biases; we create rules in our society based on our biases and we rule society with our biases. So how can we expect AI not to have them? With AI, though, we have a bigger problem: The biases will be propagated much faster and on a bigger scale! Can we change this? I still believe we can, but it will need to be a conscious decision and a collective effort.
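
A toy illustration of that point (the data below are entirely invented): a model trained on biased historical decisions does not invent the bias, it faithfully learns it, and once deployed it applies the same rule at machine speed and scale.

```python
# Invented toy data: past decisions that were biased against group 1.
# Features: (years_of_experience, group); outcome: 1 = approved, 0 = rejected.
HISTORICAL_DECISIONS = [
    ((5, 0), 1), ((6, 0), 1), ((4, 0), 1),  # group 0: mostly approved
    ((5, 1), 0), ((6, 1), 0), ((7, 1), 0),  # group 1: mostly rejected
]

def learned_decision(group: int) -> int:
    # The simplest possible "model": the majority outcome per group.
    # It reproduces whatever pattern -- fair or biased -- is in the data.
    outcomes = [y for (_, g), y in HISTORICAL_DECISIONS if g == group]
    return round(sum(outcomes) / len(outcomes))

# Two equally experienced applicants, differing only in group membership:
print(learned_decision(0))  # 1 -- approved
print(learned_decision(1))  # 0 -- rejected: the old bias, now automated
```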

I see another interesting angle to your question which concerns liability. When we created corporations and institutions and gave them power of attorney, how did we answer this question of liability? A regime, a government, a corporation has a certain power over human life at various levels – who are these institutions? Are they human? Or “artificial or imaginary” objects? The world of AI won’t be different in conceptual terms from today’s institutions.

By definition, AI includes the word “intelligence”. As we know, AI works with quantitative methods, correlations and the results of algorithms, which lead to “the biggest, the most frequent, etc.”. Can this kind of technology be “intelligent”?

I would say it depends on how we define intelligent. We have different types of AI¹:

  • Narrow or Weak AI: This is what is mainly used today; e.g., for facial recognition, speech recognition/voice assistants, driving a car, or searching the internet – and this technology is very intelligent at completing the specific task it is programmed to do.
  • Strong or Deep AI: This technology has not yet been achieved; it is a type of AI for which we need to find a way to make machines conscious. Machines would have to take experiential learning to the next level, not just improving the efficiency of individual tasks, but gaining the ability to apply knowledge gained from experience to a wider range of different problems. This sort of AI could be achieved in the near future as well; all we need is for a computer to be exposed to many different situations and to learn from them. It might not be easily attainable in all domains, but we have made good progress on this type of AI in the lab, in situations where AI can recognise human feelings or tone of voice and adapt its responses accordingly.
  • Superintelligence AI: This is a type of AI where machines become self-aware and surpass the capacity of human intelligence and ability. This type only exists in the world of sci-fi – at least so far; but in my opinion, even this type can eventually be achieved. After all, throughout history, we humans have created everything we dreamed of.

You work in the domain of AI and ethics, and you were a member of a group that developed ethical principles for AI. Are those ethical principles meant to be a general guideline for AI technologies or machines? Or are they intended to be the basis for an ISO standard or an international treaty? Will or should they be part of a company’s corporate responsibility, corporate governance or code of conduct? Or do they represent new innovation goals as a renunciation of Six Sigma? How will this translate into concrete terms: Will human beings learn and abide by the ethical principles, or will ethics be a technical part of the software that influences the AI process (ethics by design)?

Well, in an ideal world, ethics should be part of the structure! This is the only way we can make complete use of AI in an ethical way. However, ethics vary from nation to nation. This creates some concern, and no straightforward solution has yet been found. But will it happen at a broad level? I am not sure. My hope, and the effort of the community, has been to ensure that we have some standards which can be altered or adjusted. After all, we have rules in every society, so why not in this case? The situation, however, is a bit more complex: How well can we assess just how fair and ethical these elements are? Should we destroy an AI if its algorithm is not fair? Or would we be unfairly punishing the creator? We might want to borrow some concepts from our current models for companies: When a company commits a crime on a big scale, what are the consequences?

I realise that I am asking more questions than providing answers, but I think this is the role of the community: to ask questions, to create awareness regarding unresolved issues, and to make those who come up with the rules more accountable.

As far as I am aware, machines do not possess innately human qualities or emotions, although scientists predict that it is only a matter of time before they are capable of this as well. Will the quantum computer become a game changer? Will machines develop emotions, or will only the results of what they have learnt display emotional, ethical and human traits?

For Strong AI, quantum computing would be of great assistance. We simply need to increase the speed at which test data is generated and at which the algorithm is exposed to that data: The faster we achieve this in technological terms, the sooner we arrive at the results you are talking about. Moreover, we cannot forget that for certain complex tasks, today’s computers cannot deliver the required power.

We also have to consider another matter: We not only need quantum computing to have more power, but also to use less energy. Looking at today’s energy consumption and global warming, I think we can all agree that we won’t be able to live on this planet if we extrapolate the energy that a concomitant increase in computing power would require. Therefore, we need to reduce the energy required. Such a condition could be achieved within the domain of nano quantum computing, where, simply stated, electrons generate current not by consuming energy but merely by their Brownian motion. That is another exciting topic in this domain.

There is a common understanding that a business person’s primary motivation and goal is to increase the profits of his or her business and that s/he will probably not care about noble (ethical) motivations. The same could be said of technologists, isn’t that so? Are they also only interested in fostering innovation – regardless of laws, ethics and fields of application? Or, in other words, will and can a technologist cease to satisfy his or her curiosity because the outcome might have a negative impact? Is a technologist oriented only towards feasibility and producibility rather than ethical duties?

I think this might be true of some people. However, we shouldn’t generalise. Regardless of the topic, whether it be AI or something else, there are smart, responsible people as well as some narcissists among us who are equally intelligent. Some will create technologies to improve the quality of life for others, while others might only create technologies for their own benefit! We have seen different applications of digital technologies, and I believe it will be the same for AI! As in every domain, everyone (fair or selfish, ethical or unethical) will be able to participate. This means that regulating this domain will presumably become necessary – at least in the long term.

The digital world has made the world more accessible and less dependent on the nobility. It has resolved lots of problems but, at the same time, created plenty of new ones. In the world of digitisation, we can easily listen to lots of motivational TED talks, have access to education, and become more aware and supportive of peaceful movements. At the same time, racist and criminal elements can also seek more support and expand more easily and faster. However, if we ask ourselves what the net effect of the digital world has been, I think we would all agree that it has been, on the whole, more positive. This development is also the next step to be taken by our civilisation; it is a one-way street and there is no way back. We have grown up in this world and developed the skills required to live in it, and, in a few hundred years, human intelligence and brains will be completely different from what they are today.

When looking at life on a timeline, any discovery or step is just a step; it will have a past and a future, and the rise of AI will be the same.

My opinion is:

“AI is just another step in human development!”

Dr Leila Taghizadeh, MBA

One of the best examples is the multitude of different privacy laws – we need only look at the European Community, the US and China. In order to come to a common understanding, a common set of ethics – at least in democratic states – is necessary. How can people agree on a common set of ethics if they do not have a shared history that leads them to the same or similar ethical perceptions? Or will machines and technology solve this problem because they will apply the same ethical rules, so that machines and technology will teach us what is, or is supposed to be, ethical?

I do not think we will reach a common agreement; we will still grow regionally and will have some international umbrella rules; other regulations have developed in this way, after all. This might prohibit suing a specific AI from one region in another. I think AI – at least in the short term – will not change the nature of our civilisation. With the advent of Superintelligence AI, we might see the next revolution (after the agricultural and industrial revolutions). However, I do hope that we have a more accessible and fairer world by then!

Leila, thank you for sharing your insights on the compatibility of ethics and AI.

Thank you, Cristina, and I look forward to reading your upcoming interviews with recognised experts, delving even deeper into this fascinating topic.


1 https://www.investopedia.com/terms/w/weak-ai.asp

About me and my guest

Dr Maria Cristina Caldarola

Dr Maria Cristina Caldarola, LL.M., MBA is the host of “Duet Interviews”, co-founder and CEO of CU³IC UG, a consultancy specialising in systematic approaches to innovation, such as algorithmic IP data analysis and cross-industry search for innovation solutions.

Cristina is a well-regarded legal expert in licensing, patents, trademarks, domains, software, data protection, cloud, big data, digital eco-systems and industry 4.0.

A TRIUM MBA, Cristina is also a frequent keynote speaker, a lecturer at St. Gallen, and the co-author of the recently published Big Data and Law now available in English, German and Mandarin editions.

Dr Leila Taghizadeh, MBA

Dr Taghizadeh is passionate about solving problems. She sees AI as a new tool capable of solving lots of problems that are currently bedevilling us. Dr Taghizadeh supports senior management in incorporating cyber security and digital risks into business matters and in providing business solutions for those issues, following the maxim "If you cannot measure it, you cannot manage it". Leila is a TRIUM Global Executive MBA graduate and holds a PhD in Physics.
She is head of Cyber Security Risk at Allianz. Previously, she was an advisor to the Chief Product Officer at SWIFT on enterprise risk management. Prior to that, Leila worked as a Human Cyber Security Manager, where she was responsible for defining, executing and reporting on security awareness strategies.
Dr Taghizadeh has lived and worked in 8 countries, runs marathons (albeit at her own pace!) and dreams of a world with diversity and inclusion in its DNA.
