
The greatest achievement of humankind is its enormous ability to find its way in a highly complex world, to make that world usable for itself and to develop itself in the process. Is the human brain a model for AI?
In the latest of her Duet interviews, Dr Caldarola, editor of Data Warehouse and co-author of Big Data and Law, and AI expert Prof. Dr Ralf Otte consider whether AI has the capacity to think.
Over and over again, claims have been made from various sides that AI possesses the ability to think. What is crucial for this capacity and what is meant by “thinking”?
Prof. Dr Otte: A very difficult question that has been posed for centuries, if not millennia. “Thinking” in the context of AI, the so-called deductive or thinking AI, is now defined as “drawing logically correct conclusions”. Information is available that is assumed to be true and leads to new knowledge through logic (either propositional logic or first-order predicate logic). This is “thinking” in the sense of AI, and AI has been capable of it since 1956.
Experts are now discussing whether this is “thinking” or merely a “simulation of thinking”. We engineers in the field of AI no longer distinguish between the two, because the machine's logically correct conclusions are now so good that we equate the simulation with thinking.
A psychologist or neurophysiologist, on the other hand, would refuse to call this actual thinking, and rightly so, because a machine “thinks” differently from a human. While a machine thinks exclusively in logical terms, humans have other modes of thinking at their disposal.
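To make the deductive side concrete, here is a minimal sketch, in Python, of forward chaining over propositional Horn clauses: starting from facts assumed to be true, rules are applied until no new conclusions follow. The facts and rules are invented for illustration and are not taken from any system Prof. Dr Otte describes.

```python
# Forward chaining over propositional Horn clauses:
# "drawing logically correct conclusions" from facts assumed to be true.

facts = {"it_rains", "i_am_outside"}                  # premises assumed true
rules = [
    ({"it_rains", "i_am_outside"}, "i_get_wet"),      # if all premises hold, conclude ...
    ({"i_get_wet"}, "my_clothes_are_wet"),
    ({"it_is_sunny"}, "i_need_sunscreen"),            # never fires: premise not given
]

def forward_chain(facts, rules):
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain(facts, rules))
# Derives 'i_get_wet' and 'my_clothes_are_wet', but not 'i_need_sunscreen'.
```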
In your opinion, is there an important or decisive difference between human thinking and the “thinking” of AI?
The thinking of a machine and the thinking of a human being are completely different. The AI on a machine is mapped in purely mathematical terms. Laypeople often assume that a neural network is more than mathematics. This is incorrect, because a neural network can be described entirely by linear algebra. In the end it is tables of zeros and ones, and these can be combined so effectively today that mathematical algorithms emerge which simulate thinking. Mathematical algorithms run in the form of software.
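As an illustration of this “nothing but mathematics” point, here is a minimal sketch of a forward pass through a tiny neural network written as plain linear algebra plus a simple non-linearity. The layer sizes and random weights are arbitrary and chosen only for illustration.

```python
import numpy as np

# A tiny two-layer neural network is just matrices, vectors and a
# simple non-linearity: y = W2 @ relu(W1 @ x + b1) + b2
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)   # layer 1: 3 inputs -> 4 units
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)   # layer 2: 4 units -> 2 outputs

def forward(x):
    hidden = np.maximum(0.0, W1 @ x + b1)   # matrix-vector product + ReLU
    return W2 @ hidden + b2                 # another matrix-vector product

print(forward(np.array([1.0, 0.0, -1.0])))  # numbers in, numbers out
```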
It's different with humans. There is a common fallacy that algorithms also run in the brain, as if it were a form of computer. This is simply wrong. A person's brain is not a computer on which software runs; there is no mathematics going on in the brain. This makes the difference from AI's “thinking” obvious.
My favourite quote, from Frieder Nake:
“If you think that a machine thinks, you think like a machine.”
When we send children to school, it takes ten to twelve years for the chemical-physical processes of the brain to be modulated in such a way that the brain becomes capable of mathematics at all. The addition of two numbers is simply not a natural feature of the brain. The human brain has to learn mathematics, i.e. the chemical-physical processes of the brain have to be modulated so that, in the end, adding two numbers becomes possible.
This has a huge impact. The AI on a computer has to adhere to mathematical laws, because AI is pure mathematics. A person's intelligence can adhere to mathematical laws, but it does not have to. Laymen believe that AI is therefore better than the human brain. This is also incorrect, because mathematics has many limitations.
Rather, in addition to mathematics, and beyond the point where AI reaches its mathematical limit, humans have other options at their disposal, including perception and learning via the chemical-physical possibilities of the brain.
A fly is intelligent because it can search for and find food, it can reproduce, and so on. This intelligence is encoded at a basic level in the fly. But if a fly strays into a living room, it has a problem: it keeps flying into the window pane and cannot learn to adapt to the new environment in order to feed, reproduce or escape. The fly knows nothing of window panes; its logically correct conclusions from nature tell it that whatever it can see through, it can fly through. That is how evolution designed it. The fly will die because it lacks the next level of intelligence, which is learning how to escape from a living room.
Learning begins where your logical conclusions no longer get you anywhere. A dog or even a cat will bump its head against the window the first time. However, they learn that they can see through a window pane but cannot walk through it, and consequently will not bump into it again. They have a higher level of intelligence than the fly because they can learn. This is called induction, because the living being learns rules on its own by observing its environment. Thinking is different, because there the rules are derived logically. AI can also be a learning AI (neural networks, deep learning, decision trees, association rules). However, this machine-learning AI is also purely mathematical and runs into the limits of mathematics. If an AI has learned something within a given data space, it can create new facts through interpolation.
An example: a child learns to add a few example numbers between 1 and 100. After a few attempts, the child can add all the numbers between 1 and 100 and even beyond. A child's interpolation is excellent. I can teach an AI this interpolation in the number range between 1 and 100 with about 30 numerical examples. But if I move to the range from 1000 to 2000, it is so far away from the learned data space that the AI's answer is always wrong. AI is therefore only right inside the interpolation space and almost always wrong outside it. With ChatGPT, this is called hallucinating. Unfortunately, the word “hallucinate” trivialises the problem, because outside the interpolation space the results of AI are simply wrong. And that can have horrific consequences, for example in the case of a hallucinating disc brake. That's why so many engineers cannot understand why so many hallucinations are currently tolerated in ChatGPT (currently around 25 percent). This would not be tolerable in any other area, especially vehicles, nuclear power plants and the like, and such a system could not be placed on the market, because the tolerated AI error rate there is 0.1 percent. That's why an AI always has to tell me whether it is working inside or outside the interpolation space. If it is the latter, I am in imminent danger.
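The interpolation argument can be illustrated in a few lines of code. A rough sketch, assuming scikit-learn is available: a small neural network is trained to add pairs of numbers drawn only from 1 to 100 and is then asked about pairs from 1000 to 2000. The sketch uses a few hundred random pairs rather than the roughly 30 examples mentioned above, simply to keep the fit stable, and the exact error figures will vary; the pattern (small error inside the training range, large error outside it) is the point.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

def pairs(low, high, n):
    X = rng.uniform(low, high, size=(n, 2))
    return X, X.sum(axis=1)                      # target: the true sum

X_train, y_train = pairs(1, 100, 300)            # the "interpolation space"
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32,), activation="tanh",
                 solver="lbfgs", max_iter=10_000, random_state=0),
)
model.fit(X_train, y_train)

for label, (low, high) in [("inside 1-100", (1, 100)),
                           ("outside 1000-2000", (1000, 2000))]:
    X_test, y_test = pairs(low, high, 200)
    err = np.mean(np.abs(model.predict(X_test) - y_test))
    print(f"{label}: mean absolute error ~ {err:.1f}")
# Typical outcome: a small error inside the training range,
# an error of hundreds or more outside it ("hallucination").
```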
Humans can learn within an interpolation space and apply what they have learned in an extrapolation space, while AI cannot. For example, people learn to drive a car in a small city like Heidelberg (interpolation space) and can then also drive in a big city like Frankfurt (extrapolation space). An AI in the vehicle would not be able to do that; an AI system can only be operated well inside its interpolation space, not in an extrapolation space. AI will therefore never fly an airplane, drive autonomously or administer justice… These are the mathematical limits of AI, because such tasks require second-order predicate logic. People therefore need to be sensitised to where they can usefully use AI, namely in the field of deductive AI, or inductive AI within the interpolation space. AI can only deal with propositional logic and first-order predicate logic, not with other types of logic such as fuzzy logic, modal logic, quantum logic and so on.
What can AI do besides pattern recognition and statistical statements?
Logical conclusions. There are various levels of intelligence: deduction, induction, cognition, perception, self-perception, feelings, will and reflection. Each of these levels has different strengths. Deduction can reason logically, induction can make statistical statements, and cognition merges the two.
When a two-year-old child sits under an apple tree in the garden in autumn, he or she can observe how individual apples fall from the tree or how a strong gust of wind brings down a lot of apples at once. We are in the field of induction here, i.e. learning that all apples fall from the tree in autumn. A child as well as an AI can do this. But the child will then use deduction and conclude that all pears will also fall from the tree in autumn, even without ever having observed this in detail. The child will then look for a pear tree and test and verify this conclusion. In this way, humans alternate between deduction and induction, which AI cannot do.
AI can learn from statistical data and, separately, draw logical conclusions. What it can hardly do at all is merge induction and deduction, i.e. cognition. AI can learn the syntax of German and English on the basis of 30 billion sentences; that is what ChatGPT has done. However, it cannot enter the extrapolation space with these languages. Then it hallucinates. A human can be wrong in the cognitive realm, while AI there is always wrong. Humans are therefore much further ahead of AI on the intelligence levels mentioned above. AI cannot do everything; it will remain much dumber than humans, and it still will be in 100 years. It is simply mathematically limited, even if it can calculate and write faster.
Can or will AI be able to predict the behaviour of complex self-organising systems, e.g. a volcanic eruption, the Ukraine war or Covid?
No! This would only be possible if the volcanic eruption, the Ukraine war or Covid were subject to a deterministic law, and such events are not! So when it comes to probabilistic processes, such as the weather, lottery numbers and so on, nothing can be predicted, because they are not mathematically representable processes. In addition to probabilistic and deterministic laws, there are also mixed forms, such as chaotic processes (e.g. the stock exchange). Chaotic means that there is a mathematical formula underneath that we do not know. Here, AI is relatively good. Volcanic eruptions can only be predicted if the eruption inevitably follows from vibration patterns. The more volcanic eruptions are studied in the future, the more probabilistic they will turn out to be, and the less promising a prediction with AI will be.
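As a textbook illustration of what “chaotic” means here (a standard example, not one Prof. Dr Otte cites): the logistic map is governed by a simple deterministic formula, yet two almost identical starting values drift apart within a few dozen steps, which is why even a perfect model of such a process only helps over short horizons.

```python
# Logistic map x_{n+1} = r * x_n * (1 - x_n): a fully deterministic
# formula whose trajectories nonetheless diverge from near-identical starts.
r = 3.9

def trajectory(x0, steps):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = trajectory(0.200000, 60)
b = trajectory(0.200001, 60)   # starting value differs by one millionth

for n in (0, 10, 30, 60):
    print(f"step {n:2d}: difference = {abs(a[n] - b[n]):.6f}")
# The difference starts at 0.000001 and grows to order one within a few
# dozen steps: short-term prediction is possible, long-term prediction is not.
```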
We do not all live in one and the same reality, and reality is not recognisable to all of us in the same way, because every person senses, feels, wants and thinks differently, and because everyone experiences, learns about and reduces the complexity of their environment in their own distinct way. This repeatedly leads to serious conflicts. It would therefore be important to accept that each person lives in his or her own reality. Can AI help humans learn about and understand each other here, and if so, how?
I do not agree with what you are saying. There are different levels of truth, and these also differ between the professions. A philosopher, a lawyer, a scientist, a clergyman… each works with a different concept of truth.
For example, Robert Habeck, as a philosopher, works with a concept of truth based on consensus or plausibility. When a philosopher has discovered something, he needs a logical deduction and a consensus among his philosophical colleagues. If he does not find a consensus and receives too much criticism, then he has to reject his discovery.
If an engineer builds a bridge that, according to the engineer, can withstand a 40-ton truck, then the engineer will have such a truck drive over the bridge, see what happens and thus obtain proof of the thesis. That would be the truth for an engineer (also called equivalence truth). As scientists, we are fortunate to be able to work at the highest level of truth, because we can establish a hypothesis and validate it directly through testing. If I change the hypothesis and say the bridge will last 100 years, then I cannot prove it immediately, because in 100 years I will no longer be alive. I have to approach the proof differently, using a congruence truth: I can prove that all the deductive laws of gravity, building physics etc. have been taken into account.
But there are also truths of power, which hold simply because someone asserts them. Just think of Christian Drosten, who declared various hypotheses to be correct during the Corona crisis. That is also why we engineers have such a hard time with the new heating law: we know from the laws of physics that it cannot work, because heating with a heat pump at minus 10 degrees Celsius does not work.
It is true that people have their own truths, specific to their own lives. However, this concept of truth should not be transferred to science and technology, and thus not to AI systems.
AI knows no feelings and emotions, even though robots recognise facial expressions, infer feelings and emotions, and respond to humans with a rich selection of behaviours. How should AI select among these behaviours, and which ones should it choose? Is this the new “logique du coeur”?
It is possible to train AI on facial expressions, behaviour and other forms of expression so that it can classify whether someone is sad, happy, angry, etc. Yet even if such imitation is technically possible, it is not fully permitted under the new AI Act.
AI does not have its own feelings and will not be endowed with them. Why do living beings have feelings? Children usually touch a hot stove only once; they feel the pain and, because of that feeling, do not repeat it. For an AI to learn not to touch a hot stove, a single event is not enough, as it is for humans; it needs at least 1,000 training examples. Humans are therefore a “small data developer”, while AI is a “big data developer”.
In many respects, what is being done today in the field of AI development is absurd. It is a waste of resources, especially of energy, and is in no way sustainable or environmentally friendly. AI needs on average ten thousand times more training examples than a human to reach a human level; only then does ChatGPT speak as well as we do. This also shows us where AI cannot be used, namely where the required data is not available, or not in the required quantity.
AI will not defeat world hunger. There is already enough food in the world to feed everyone. AI is overrated because, as I have explained, it has its mathematical limits.
It is used today as a “money printing machine” because people get rich very quickly with it. If a start-up advertises using the word AI, it has a better chance of getting off the ground… it is valued more highly and earns more money. If I talk about AI today instead of the mere mathematics behind it, I earn ten times as much.
Big Tech is selling a dream to board members and journalists around the world. They swallow it whole because, as materialists, they are geared towards profit. The world cannot be explained by algorithms alone. There is also spirituality, even if it supposedly has no place in today's atheistic world. If society really believes that today's crises and challenges can be overcome with AI, and Big Tech keeps feeding this dream with the argument that AI has no feelings and therefore cannot and will not act with evil intent, then society is mistaken.
AI will not find the best rules for the world. Systems like the German state cannot be controlled algorithmically. At present, AI exposes us to more risks than opportunities (see administration, education…). We are at a tipping point where AI can also do a lot of nonsense and damage. The hype surrounding AI is merely an immense advertising success, nothing else.
Technology (e.g. the car) has never made anything cheaper, or at least I don't know of any such case. How absurd is the claim that “CoPilot” can and will help me with everything. How absurd is the claim that the data in an electronic patient record could possibly increase the chances of a cancer being cured? It just makes me want to laugh.
Despite the global AI hype, I am still in a positive mood, because AI will never master second-order predicate logic, and that will burst the “bubble”. The flood of fake news is also a blessing of sorts, because the Internet will serve less and less as a source of information, and people will retreat to the analogue world and return directly to the source (again).
Prof. Dr Otte, thank you for sharing your insights on AI.
Thank you, Dr Caldarola, and I look forward to reading your upcoming interviews with recognised experts, delving even deeper into this fascinating topic.