Is AI capable of thinking on its own or learning how to think?

Prof. Dr Ralf Otte

In the latest of her Duet interviews, Dr Caldarola, editor of Data Warehouse as well as co-author of Big Data and Law, and AI expert Prof. Dr Ralf Otte consider the issue of AI's capacity to think.

Over and over again, claims have been made from various sides that AI possesses the ability to think. What is crucial for this capacity and what is meant by "thinking"?

Prof. Dr Otte: A very difficult question that has been posed for centuries, if not millennia. "Thinking" in the context of AI – the so-called deductive/thinking AI – is now defined as "drawing logically correct conclusions". Information is available that is assumed to be true and leads to new knowledge through logic (either propositional logic or first-order predicate logic). This is "thinking" in the sense of AI. AI has been capable of this since 1956.
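
To make this notion of deduction concrete, here is a minimal sketch in Python of forward chaining over propositional rules: facts assumed to be true are combined with implications until no new conclusion follows. The facts and rule names are hypothetical examples chosen purely for illustration, not taken from the interview.

```python
# Deduction as "drawing logically correct conclusions":
# a tiny forward-chaining loop over propositional rules.
# Facts and rules are illustrative placeholders.

facts = {"socrates_is_human"}                       # information assumed to be true
rules = [
    ({"socrates_is_human"}, "socrates_is_mortal"),  # if human, then mortal
    ({"socrates_is_mortal"}, "socrates_will_die"),  # if mortal, then will die
]

changed = True
while changed:                                      # repeat until nothing new can be derived
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)                   # a logically correct conclusion = new knowledge
            changed = True

print(sorted(facts))
# ['socrates_is_human', 'socrates_is_mortal', 'socrates_will_die']
```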

Experts are now discussing whether this is "thinking" or a "simulation of thinking". We engineers in the field of AI no longer distinguish between them because the logically correct conclusion is now so good that we equate simulation with thinking.

A psychologist or neurophysiologist, on the other hand, would refuse to consider this to be actual thinking. This is also true because a machine "thinks" differently than a human. While a machine thinks exclusively in logical terms, humans have other possibilities of thinking at their disposal.

In your opinion, is there an important or decisive difference between human thinking and the "thinking" of AI?

The thinking of a machine and of a human being is completely different. The AI on a machine is mapped in purely mathematical terms. Laypeople often think that a neural network is more than mathematics. This is incorrect because a neural network can be described by linear algebra. It's always about tables with zeros and ones, and they can be linked so well today that mathematical algorithms are created that simulate thinking. Mathematical algorithms run in the form of software.
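
To illustrate the point that a neural network can be described by linear algebra, here is a minimal sketch in Python/NumPy: a two-layer network reduces to matrix-vector products plus a simple non-linearity. The layer sizes and random weights are arbitrary and purely illustrative.

```python
import numpy as np

# A tiny feed-forward network written out as plain linear algebra:
# output = W2 @ relu(W1 @ x + b1) + b2
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)   # layer 1: 3 inputs -> 4 hidden units
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)   # layer 2: 4 hidden units -> 2 outputs

def forward(x):
    h = np.maximum(0.0, W1 @ x + b1)            # matrix-vector product + non-linearity
    return W2 @ h + b2                          # another matrix-vector product

print(forward(np.array([1.0, 0.5, -0.2])))
```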

It's different with humans. There is often the fallacy that algorithms are also running in the brain – that the brain is a kind of computer. This is simply wrong. A person's brain is not a computer on which software runs. There is no mathematics going on in the brain. This makes the differences to AI's thinking obvious.

My favourite quote:

"If you think that a machine thinks, you think like a machine."

Frieder Nake

When we send children to school, it takes ten to twelve years for the chemical-physical processes of the brain to be modulated in such a way that the brain is capable of mathematics at all. The addition of two numbers is simply not a natural feature of the brain. The human brain has to learn mathematics, i.e. the chemical-physical processes of a brain have to be modulated in such a way that in the end the addition of two numbers becomes possible.

This has a huge impact: the AI on a computer has to adhere to mathematical laws because AI is pure mathematics. A person's intelligence can adhere to mathematical laws, but it does not have to. Laymen believe that AI is therefore better than the human brain. This is also incorrect because mathematics has many limitations.

Rather, in addition to mathematics and the use of AI up to its mathematical limit, humans have other options at their disposal, which include perception and learning via the chemical-physical possibilities of the brain.

A fly is intelligent because it can search for and find food, it can reproduce, etc. This intelligence is innately encoded in the fly. If a fly gets lost in a living room, it has a problem: it can no longer leave the room, because it flies continuously into the window pane and cannot learn how to adapt to the new environment for reproduction, food intake and escape. The fly knows nothing of window panes; following the logically correct conclusions nature has given it, it assumes it can fly through everything it can see through. This is how it is designed by evolution. The fly will die because it does not have the next level of intelligence, which is learning how to escape from the room.

Learning becomes necessary when you get stuck with your logical conclusions alone. A dog or even a cat will bump its head against the window the first time. However, they learn that they can see through a window pane but cannot walk through it and consequently will not bump into it again. They have a higher level of intelligence than the fly because they can learn. This is also called induction, because the living being can learn rules on its own by observing the environment. It's different with thinking, because there the rules are derived logically. AI can also be a learning AI (neural networks, deep learning, decision trees, association rules). However, this machine-learning AI is also purely mathematical and continues to run up against the limits of mathematics. If AI has learned something within a data space, it can create new facts through interpolation.

An example of this could be a child learning to add certain example numbers between 1 and 100. After a few times, the child can add all the numbers between 1 and 100 and even beyond. The interpolation of a child is excellent. I can teach AI this interpolation in the number range between 1 and 100 with about 30 numerical examples. But if I extend the range of numbers to 1000 to 2000, then it is so far away from the learned data space that the answer of an AI is always wrong. That's why AI is only right inside the interpolation space and always or almost always wrong outside. With ChatGPT, this is called hallucinating. Unfortunately, the word "hallucinate" trivialises the problem, because the results of AI are simply wrong. And that can have horrific consequences, for example, in the case of a hallucinating disc brake. That's why so many engineers can't understand why so many hallucinations are currently tolerated in ChatGPT (currently at 25 percent). This would not be tolerable in any other area, especially vehicles, nuclear power plants, etc., and such a system should not be placed on the market, because the tolerated AI error rate there is 0.1 percent. That's why an AI always has to tell me whether it's working inside or outside the interpolation space. In the latter case, there is imminent danger for me.
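
A minimal sketch of this interpolation limit, using a scikit-learn decision tree (one of the learning methods mentioned above) trained on about 30 addition examples between 1 and 100; the library choice and the exact numbers are illustrative assumptions, not Prof. Otte's own experiment.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# Interpolation space: about 30 example pairs of numbers between 1 and 100 and their sums.
X_train = rng.integers(1, 101, size=(30, 2))
y_train = X_train.sum(axis=1)

model = DecisionTreeRegressor(random_state=0).fit(X_train, y_train)

# Inside the learned data space the prediction is roughly right (true sum: 97).
print(model.predict([[40, 57]]))

# Far outside it, the tree can never predict more than the largest sum it has seen,
# so the answer is hopelessly wrong (true sum: 3200).
print(model.predict([[1500, 1700]]))
```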

Humans can learn within an interpolation space and apply this in an extrapolation space, while AI cannot. For example, people learn to drive a car in a small city like Heidelberg (interpolation space) and can then also drive a car in a big city like Frankfurt (extrapolation space). An AI in the vehicle would not be able to do that. An AI system can only be operated well in an interpolation space and not in an extrapolation space. AI will therefore never fly an airplane, drive autonomously, nor administer justice… These are the mathematical limits of AI, because such tasks require second-order predicate logic. Therefore, people need to be sensitised to where they can usefully use AI, namely in the field of deductive AI or inductive AI within the interpolation space. AI can only deal with propositional logic and first-order predicate logic, and not with other types of logic such as fuzzy logic, modal logic, quantum logic and so on.
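
As a sketch of the demand that an AI should report whether it is operating inside or outside its interpolation space, here is a crude bounding-box test in Python. A real system would need a more robust in-distribution check (density estimation, for instance); this is only meant to illustrate the idea.

```python
import numpy as np

def outside_interpolation_space(X_train, x_query):
    """Crude check: does the query leave the bounding box of the training data?"""
    lo, hi = X_train.min(axis=0), X_train.max(axis=0)
    return bool(np.any(x_query < lo) or np.any(x_query > hi))

# Reusing the addition example: training pairs between 1 and 100.
X_train = np.random.default_rng(0).integers(1, 101, size=(30, 2))

print(outside_interpolation_space(X_train, np.array([40, 57])))      # typically False: safe to interpolate
print(outside_interpolation_space(X_train, np.array([1500, 1700])))  # True: extrapolation, distrust the answer
```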

What can AI do besides pattern recognition and statistical statements?

Logical conclusions. There are various levels of intelligence: deduction, induction, cognition, perception, self-perception, feelings, will, and reflection. Each of these levels has different strengths. Deduction can reason logically, induction can make statistical statements, and cognition merges the two.

When a two-year-old child sits under an apple tree in the garden in autumn, he or she can observe how individual apples fall from the tree or how a strong gust of wind makes a lot of apples fall to the ground. We are in the field of induction here – i.e. learning that all apples fall from the tree in autumn. A child as well as AI can do this. A child will then use deduction and conclude that all pears will also fall from the tree in autumn, even if he or she has never observed this process in detail. The child will then look for a pear tree and evaluate and verify this conclusion. In this way, humans use the alternation of deduction and induction, which AI cannot do.

AI can learn from statistical data and, separately, draw logical conclusions. It can hardly merge induction and deduction – i.e. cognition – at all. AI can learn the syntax of the German and English languages on the basis of 30 billion sentences. That's what ChatGPT has done. However, it cannot enter the extrapolation space with these languages. Then it hallucinates. A human can be wrong in the cognitive realm, while AI is always wrong there. Humans are therefore much further ahead than AI on the above-mentioned intelligence levels. AI cannot do everything; it will remain much dumber than humans and will still be so in 100 years. It is simply mathematically limited, even if it can calculate and write faster.

Can or will AI be able to predict the processes of complex self-organising systems, e.g. a volcanic eruption, the Ukraine war or Covid?

No! This would only be possible if the volcanic eruption, the Ukraine war or Covid were subject to a deterministic law. And such events are not! So, when it comes to probabilistic processes, such as the weather or lottery numbers, nothing can be predicted, because they are not mathematically representable processes. In addition to probabilistic and deterministic laws, there are also mixed forms, such as chaotic processes (e.g. the stock exchange). Chaotic means that there is an underlying mathematical formula that we do not know. Here, AI is relatively good. Volcanic eruptions can only be predicted if the eruption inevitably results from vibration patterns. The more volcanic eruptions are studied in the future, the more probabilistic they will turn out to be and the less promising a prediction with AI will be.

We do not all live in one and the same reality, and this reality is not recognisable to all of us in the same way, because every person senses, feels, wants and thinks differently, and because everyone experiences, learns and reduces the complexity of their environment in their own distinct way. This repeatedly leads to serious conflicts. Therefore, it would be important to accept that each person lives in his or her own reality. Can AI – and if yes, how – help humans to learn and understand each other here?

I do not agree with what you are saying. There are different levels of truth, and these also differ between the different professions. There are the concepts of truth of a philosopher, a lawyer, a scientist, a clergyman… which are all different.

For example, Robert Habeck, as a philosopher, has a concept of truth based on consensus or plausibility. A philosopher, when he has discovered something, needs a logical deduction and a consensus among his philosophical colleagues. If he does not find a consensus and receives too much criticism, then he would have to reject his discovery.

If an engineer builds a bridge that, according to the engineer, can withstand a 40-ton truck, then the engineer will have such a truck drive over the bridge, see what happens and thus get the proof of his or her thesis. That would be the truth for an engineer (also called equivalence truth). As scientists, we are therefore fortunate to be able to work at the highest level of truth, because we can establish a hypothesis and validate it directly through testing. If I change the hypothesis and say the bridge will last 100 years, then I cannot prove it immediately, because in 100 years I won't be alive anymore. I have to approach the proof differently by using a congruence truth, because I can prove that all deductive laws of gravity, building physics etc. have been taken into account.

But there are also the truths of power, where something is held to be true because someone says and asserts it. Just think of Christian Drosten, who declared various hypotheses to be correct during the Corona crisis. That's why we engineers have such a hard time with the new heating law, because we know that it can't work according to the laws of physics, because heating at minus 10 degrees Celsius with a heat pump doesn't work.

It is true that people have their own truths specific to their own lives. However, this concept of truth should not be transferred to science and technology – i.e. AI systems.

AI knows no feelings and emotions, even though robots recognise facial expressions, infer feelings and emotions, and respond to humans with a rich selection of behaviours. How and which of these behaviours should AI select? Is this the new "logique du coeur"?

It is possible to train AI on facial expressions and behaviours, so that it can classify whether someone is sad, happy, angry, etc. Even if such imitation is technically possible, it is not entirely allowed under the new AI Act.

AI does not have its own feelings and will not be endowed with them. Why do living beings have feelings? Children usually touch the hot stove only once, feel the pain and refrain from repeating it because of this feeling of pain. In order for AI to learn not to come into contact with a hot stove, it does not need a single event like humans do, but rather at least 1,000 training examples. Humans are therefore a "small data developer", while AI is a "big data developer".

In many respects, what is being done today in the field of AI development is absurd. It is a waste of resources, especially of energy, and is in no way sustainable, environmentally friendly, etc. AI needs on average 10,000 times more training sets than a human to reach a human level – only then does ChatGPT speak as well as humans. This also shows us where AI cannot be used – namely where the required data is not available, or not in the required quantity.

AI will not defeat world hunger. There is already enough food to feed everyone. AI is overrated, as I have explained, because it has its mathematical limits.

It is used today as a "money printing machine" because people get rich very quickly with it. If a start-up advertises using the word AI, it has a better chance of finding starting points… and is rated better and earns more money. If I talk about AI today instead of the mere mathematics behind AI, I earn 10 times as much.

Big Tech is selling a dream to board members and journalists around the world. They swallow it whole because, as materialists, they are geared towards profit. The world cannot be explained by algorithms alone. There is also spirituality, even if it supposedly does not occur in today's atheistic world. If society really believes that today's crises and challenges can be overcome with AI, and Big Tech keeps feeding this dream with the argument that AI has no feelings and therefore cannot and will not act with evil intent, then it is mistaken.

AI won't find the best rules for the world. Systems like the German state will not be able to be controlled algorithmically. Meanwhile, AI exposes us to more risks than opportunities (see administration, education…). We are at a tipping point where AI can also do a lot of nonsense and damage. The hype surrounding AI is merely an immense advertising success, nothing else.

Technology (e.g. the car) has never made anything cheaper, or at least I don't know of any such case. How absurd is it that "CoPilot" can and will help me with anything? How absurd is it that the data in an electronic patient record can possibly increase the chances of a cancer being cured? It just makes me want to laugh.

Despite the global AI hype, I am still in a positive mood, because AI will not solve second-order predicate logic and that will burst the "bubble". The presence of fake news is also a blessing, because the Internet will serve less and less as a source of information, and the retreat to the analogue world will lead people directly back to the source (again).

Prof. Dr Otte, thank you for sharing your insights on AI.

Thank you, Dr Caldarola, and I look forward to reading your upcoming interviews with recognised experts, delving even deeper into this fascinating topic.

About me and my guest

Dr Maria Cristina Caldarola

Dr Maria Cristina Caldarola, LL.M., MBA is the host of “Duet Interviews”, co-founder and CEO of CU³IC UG, a consultancy specialising in systematic approaches to innovation, such as algorithmic IP data analysis and cross-industry search for innovation solutions.

Cristina is a well-regarded legal expert in licensing, patents, trademarks, domains, software, data protection, cloud, big data, digital eco-systems and industry 4.0.

A TRIUM MBA, Cristina is also a frequent keynote speaker, a lecturer at St. Gallen, and the co-author of the recently published Big Data and Law now available in English, German and Mandarin editions.

Prof. Dr Ralf Otte

Dr Ralf Otte is an engineer and professor of industrial automation and artificial intelligence at Ulm University of Applied Sciences. He has been working on AI projects in industry and society for 30 years. His current work focuses on researching consciousness, in particular, artificial consciousness on neuromorphic machines. Technical applications for such "machine consciousness" can be found in the field of Small Data and Computer Vision.
