Will robots dominate our world?

Artificial intelligence: The robots are getting closer

For a long time, artificial intelligence was a billion-dollar money pit: research stagnated, the euphoria of the early years dried up. But in recent years amazing things have happened: the speech recognition in our cell phones is inconceivable without AI. Autonomous driving, automated factories and autonomous combat drones are all within reach. Promptly, scientists and programmers warn of the dangers: is humanity abolishing itself? In his essay, Martin Zeyn asks what artificial intelligence can really do today and why we fear it so much.

Martin Zeyn, born in 1964, is the head of the night studio at Bayerischer Rundfunk and lives in Munich. He has written over 20 radio essays on art, pop culture and philosophy. Lately he has mainly been concerned with the future of humans in a digital world.



A monkey house.

I stare at the faces of chimpanzees, gorillas, gibbons and orangutans: long, direct, impolitely direct, shamelessly direct, as I would never do with humans. Whether they are romping around, scratching themselves, poking around in their food or just sitting quietly, they captivate me.

It's the resemblance that fascinates me. The gestures that I imitate almost automatically, as if to check that I do them the same way; gestures that are strikingly similar, yet disturbingly different.

Of course I know that what looks like my reflection is not human. But that is precisely what is fascinating: how can something be so similar to us without being the same? And how can something so similar to us appear so strange the next moment?

Developers say humans get along better with robots when the machine creatures don't look like humans. We do not accept robots that are equipped with a human face and whose gestures seem stiff, cold, wrong. Obviously they become strangers to us when they look like us, when they imitate us, when they pretend. And we notice it.

A classic science fiction motif: many short stories, for example by Philip K. Dick, revolve around this theme, the horror of similarity, the frightening moment in which we stand before something that speaks like us and behaves like us, and have to realize that it is just a robot, an android from the Nexus-6 series.

Man or machine?

In some scenarios Dick linked this with a threat: if we can no longer distinguish the robots from us, they will destroy us. But then shouldn't we really ask: who is destroying whom? For what distinguishes humans from what they have created in their own image, from a human work? The much-cited Turing test claims that a computer possesses artificial intelligence when people can no longer tell whether an answer comes from a machine or from another person.

Computers are already imitating us today. Robots assemble our cars and do their best to turn a handle, open a door and let another robot pass. Artificial intelligences answer the questions we put to the voice assistants on our cell phones or tirelessly offer us translation suggestions. They can't do everything we can. But some things they can do better than their creators. A future in which we are no longer the only intelligent beings on earth seems foreseeable. Don't we need a test that determines with absolute certainty what distinguishes a person from a machine? Which skills would count, memory, empathy, irony, and how would we measure them? Where is the threshold that separates what is genuinely human from what is merely artificial? When is a person a person?

And when merely an inferior machine, a being with an expiration date, a species overtaken by evolution? No future without Cassandra. The biochemist and science fiction writer Isaac Asimov formulated laws for robots 60 years ago. The famous one states: no robot may kill a human being. A less known one demands: no artificial intelligence may share a planet with developed biological beings. The essence of both laws: it is a command of common sense to keep the machines at a distance. Shortly before his death, the physicist Stephen Hawking warned that computers would overtake us within 100 years; and Elon Musk, head of the electric car manufacturer Tesla, repeatedly invokes the destructive power of AI, artificial intelligence, which could soon decide to launch a preventive strike without consulting a human.

Robots rule intelligently

Autonomous driving, a development focus of Tesla, is only possible thanks to AI's ability to recognize patterns. A car that cannot react on its own when another vehicle crosses its path should not be allowed to drive unsupervised.

But that is exactly the question: does it think? Does the car think? Does the artificial intelligence that controls it really think? Can it decide which of two possible steering maneuvers is more ethically responsible? Dodge a deer, even if the maneuver endangers the driver? Would it rather steer the skidding car into a kindergarten group than into a party of pensioners? Or the other way around? What are the parameters here? An average life is worth around one million euros if we take the added value through work as the calculation variable. Then the autonomous car would have to spare the limousine of a company executive and steer into the group of pensioners instead.
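
Spelled out, the coldness of this logic becomes visible. A deliberately crude sketch, with invented figures and categories (this is not a real control policy, only the utilitarian arithmetic made explicit):

```python
# A deliberately crude sketch of the "added value through work" calculus.
# All figures and categories are invented for illustration.
REMAINING_WORK_VALUE = {
    "executive": 1_000_000,  # the essay's rough figure for an average working life
    "pensioner": 0,          # no further added value through work
}

def maneuver_cost(victims: list[str]) -> int:
    # Economic value destroyed if the car steers into this group.
    return sum(REMAINING_WORK_VALUE[v] for v in victims)

options = {
    "hit the limousine": ["executive"],
    "hit the pensioners": ["pensioner"] * 5,
}

# The "optimal" choice minimizes destroyed work value: five lives
# outweighed by one salary. Exactly the calculation in question.
print(min(options, key=lambda o: maneuver_cost(options[o])))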

I don't remember being asked these questions in my driving test. It would also be news to me that such scenarios are practiced in ADAC driver training. But why do we ask these questions of a car that relieves us of the burden of steering? Do we fear it will make a bad, amoral decision?

Or worse, are we afraid it may make a better decision?

Let's look at the current world situation: environmental destruction, wars, hunger, injustice. Perhaps the domination of intelligent robots wouldn't be a terrible scenario at all?

Of course, private transport would be abolished. Because it makes no sense to use so many resources just to get person A to point B.

Of course, the production and possession of weapons would be banned. Because the deeply human formula of deterrence cannot be conveyed to an artificial intelligence: that one must intimidate an opponent in advance with the threat of killing him, so that he does not get the idea of killing you first.

Fixed-blade knives would still be allowed, because their benefits outweigh the risks.

Of course, all drugs, including wine, beer and cigarettes, would be banned. Because too many people abuse them, or because the health care costs are disproportionate to the benefits.

Coffee and tea would still be under review; the data here is not yet conclusive.

Anyone with a bit of brains would have to praise artificial intelligence for making these decisions.

Anyone who still has some dignity must fear the puritanical fury that lies in such considerations. There are good reasons to ban drinkable alcohol, but with wine an eight-thousand-year-old cultural achievement would be wiped out as well. Worse still, we would be degraded to objects that the AI does not trust to weigh up benefit and harm for themselves. We would be slaves to a good cause.

Although it would be interesting to see whether human ingenuity would not find amazing, unpredictable ways and means to lead artificial intelligence around by its non-existent nose.

Human action cannot be foreseen

Of course, programs can develop scenarios to create a desirable future for people.

Of course, these scenarios will only become reality if we work together to ensure that people do not prevent what is good for people.

The artist Hito Steyerl, inventor of the very clever formula that in addition to artificial intelligence there is also artificial stupidity, has drawn attention to how partial we are when it comes to predictions. The residents of the London high-rise Grenfell Tower had repeatedly pointed out that the risk of fire was enormous. Their fears came true; 71 people died. Why was their prediction not listened to, asks Steyerl.

Not because it was as obscure and ambiguous as those of the Oracle of Delphi. No, because human predictions, especially those made by people without social standing, do not have the same rank as predictions made by computers.

But as oracles, humans are not so bad, as the computer scientist Julia Dressel found out. She examined the program COMPAS, which is used in the USA to calculate the likelihood that offenders will reoffend. If the prognosis is favorable, courts in some states grant earlier release from prison.

Dressel gave the information available to the program - circumstances, place of residence, race - to 20 randomly selected people. This group, with no special legal or sociological knowledge, achieved the same hit rate as the program: 65 percent.

What surprised the computer scientist herself: increasing the number of parameters, that is, feeding in more of the reasons why someone relapsed in the past, does not improve the result. Calculating more parameters did not raise the hit rate. It stayed at about 65 percent accuracy. Which also means: every third prediction was wrong. For humans and machines alike.

However, since there are only two options - someone relapses or not - the hit rate of a purely random guess would be 50 percent. So the evaluation of data does increase accuracy, but not to an outstanding extent.
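
The gap can be made concrete with a small simulation. A minimal sketch with synthetic data (not Dressel's actual study), comparing a coin-flip baseline with a predictor that hits the true outcome 65 percent of the time:

```python
# A minimal sketch (synthetic data) comparing hit rates: random guessing
# versus a predictor, human panel or program alike, with a 65% hit rate.
import random

random.seed(42)
n_cases = 100_000

# Invented ground truth: does this person reoffend or not?
truth = [random.random() < 0.5 for _ in range(n_cases)]

# Baseline: guessing at random between the two options.
guesses = [random.random() < 0.5 for _ in range(n_cases)]

# A predictor that matches the true label 65% of the time.
predictions = [t if random.random() < 0.65 else not t for t in truth]

def hit_rate(pred, actual):
    return sum(p == a for p, a in zip(pred, actual)) / len(actual)

print(f"random guessing: {hit_rate(guesses, truth):.1%}")     # ~50%
print(f"65% predictor:   {hit_rate(predictions, truth):.1%}")  # ~65%
```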

There seems to be a natural limit to accuracy. Human behavior is unpredictable, which does not surprise statisticians. It is relatively easy to compare my purchases with those of other customers who have bought the same or similar things. And if a retailer promises to deliver something before we have even ordered it, in practice that often only means that the company will send us the expansion of the computer game Call of Duty, or the recently published part of the Neapolitan saga by Elena Ferrante, if we have already bought a product from that range. It is more difficult when a large number of influences can determine our actions. Then no unequivocal predictions can be made.
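
The easy case, comparing shopping baskets, needs nothing more than counting overlaps. A minimal sketch with invented customers and items:

```python
# A minimal sketch (invented data) of purchase comparison: recommend items
# bought by the customer whose basket overlaps most with mine.
purchases = {
    "me":    {"call_of_duty", "ferrante_vol_1"},
    "cust1": {"call_of_duty", "headset", "keyboard"},
    "cust2": {"ferrante_vol_1", "ferrante_vol_2"},
    "cust3": {"cookbook", "garden_hose"},
}

def jaccard(a: set, b: set) -> float:
    # Overlap of two baskets: |intersection| / |union|.
    return len(a & b) / len(a | b)

me = purchases["me"]
scores = {c: jaccard(me, basket)
          for c, basket in purchases.items() if c != "me"}

# Suggest what the most similar customer owns that I don't yet.
best = max(scores, key=scores.get)
print(purchases[best] - me)  # {'ferrante_vol_2'}
```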

Business firms, however, love predictions, because they give the board of directors a framework for how to behave in the future. The new should not catch you off guard. But it is easier to improve something old and familiar and offer it a little cheaper than to dare something truly new. Computer programs that, for example, adjust a milling cutter minimally until it achieves an optimal result are innovative in exactly this sense.
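
This kind of cautious, incremental improvement is easy to mechanize. A minimal sketch, with an invented cost function standing in for the measured milling error:

```python
# A minimal sketch of incremental tuning: nudge one machine parameter
# step by step and keep whatever improves the measured result.
def cost(setting: float) -> float:
    # Stand-in for a measured milling error; minimal at setting = 3.2.
    return (setting - 3.2) ** 2

setting, step = 0.0, 0.1
for _ in range(1000):
    # Try a small move in each direction, keep the best of the three.
    candidates = (setting - step, setting, setting + step)
    setting = min(candidates, key=cost)

print(f"tuned setting: {setting:.1f}")  # converges to ~3.2
```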

Computers aren't curious

The new has a good reputation, but few real friends. So the new should not be too new. Quite a few executives almost seem to fear the new, its all-dissolving power; they fear being unprepared. And so they fail to recognize a potential of their employees, indeed of all people: to create things anew. In a recently rediscovered lecture, the philosopher Hannah Arendt meditated on the revolution in the USA; she was particularly concerned with the question of why the founding fathers rebelled against British rule. What distinguished this revolution was the interaction of two feelings, namely "to be free and to start something new". According to Arendt, the invention of modern democracy was only possible because of this. It succeeded because the new tasted like freedom.

All men are equal: all people are equal. A revolutionary formula. At the same time, and this is a brilliant thought of Arendt's, the founding fathers simply overlooked around a fifth of the population: the slaves. But precisely because they did, because they did not realize this implication, the slave owners passed a constitution that bore within it the seeds of slavery's overcoming.

Any program would have pointed out the obvious flaw: blacks are people too. Then it cannot be true that slaves are not people. Consequently, no one may enslave them. A simple syllogism with which the logic of an assumption can be tested. But, as must be remembered, there is also a logic of slavery, valid precisely if the premise holds that blacks are not equal.
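
Written out formally, the syllogism really is machine-checkable. A minimal sketch in the proof language Lean (all predicate names are illustrative, not a claim about any historical text):

```lean
-- A minimal sketch of the syllogism: if all people are equal and
-- slaves are people, then slaves are equal.
variable {Being : Type}
variable (IsPerson IsEqual : Being → Prop)

example (allEqual : ∀ b, IsPerson b → IsEqual b)
        (slave : Being) (h : IsPerson slave) : IsEqual slave :=
  allEqual slave h
```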

Would a computer program have prevented the American Revolution by pointing out the inherent contradiction between equality and slavery? Maybe.

What Arendt points out: the founding fathers designed a state that was not implemented immediately, but which from then on was at least conceivable. People can create something new, something that certainly needs a past leading up to it but is not merely an extension of that past. Not being predictable can be a weakness as well as a possibility. Or, as Arendt put it, poetically and atypically for her: "We can begin something because we are beginnings and thus beginners."

And machines, can they be as revolutionary as humans? At this point we can reassure anyone who is afraid of artificial intelligence. Jürgen Schmidhuber, according to the "New York Times" the "Godfather of AI" (though journalists are not exactly petty when it comes to declaring idols to be gods), admits that it is very, very difficult to instill curiosity in computers. In fact, what is now referred to as AI, artificial intelligence, is actually just machine learning. Which means: like a teacher, we put a question to the machine toddler and teach it to answer this one question. Perhaps that is precisely why AI research is currently successful, apart from the rain of money from Silicon Valley: the scientists have become more humble. The limits of what can be achieved are clearly stated. It's not about creating consciousness, it's about gradual improvements: machines are only just beginning to learn, to learn from their mistakes.
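
What "teaching one question" means can be seen in the oldest learning algorithm of all. A minimal sketch of a perceptron (pure Python, invented toy task): it is taught a single question, whether a point lies above the line y = x, from labeled examples, and afterwards can answer only that question:

```python
# A minimal sketch of supervised learning: a perceptron is shown
# labeled examples of one question and learns to answer it.
def train(samples, epochs=100, lr=0.1):
    w = [0.0, 0.0, 0.0]  # weights for x, y and a bias term
    for _ in range(epochs):
        for (x, y), label in samples:
            pred = 1 if w[0] * x + w[1] * y + w[2] > 0 else 0
            err = label - pred
            # Perceptron rule: nudge the weights toward the correct answer.
            w[0] += lr * err * x
            w[1] += lr * err * y
            w[2] += lr * err
    return w

# Teaching material: grid points labeled 1 if above the line y = x, else 0.
samples = [((x, y), 1 if y > x else 0)
           for x in range(-5, 6) for y in range(-5, 6)]
w = train(samples)

# The machine now answers its one question for an unseen point.
print(1 if w[0] * 2 + w[1] * 5 + w[2] > 0 else 0)  # 1: (2, 5) is above y = x
```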

And how difficult that is, what a lifelong task, we humans know from painful experience. What programs can already do better today: they do not regard errors as failures but as a step toward a better solution. They can, because they are not emotionally involved. They don't despair, don't tire, don't get angry. They stay focused and play through all the possibilities until a solution is found. However, this is not entirely alien to us humans either. The Israeli army has started an interesting project: it has autistic people with savant abilities evaluate satellite photos, people who can barely read the expressions on faces but can stare at two images for hours until they have discovered all the differences between them. They look at the images until they find the slightest change, until they discover a possible threat.

Activities like these will one day be taken over by programs, because focusing on one tiny problem within a tremendous amount of data is beyond the ability of the vast majority of people, and because the number of autistic people who can do it is limited. And of course there is more than just this manageable area in which robots are better than humans or in which their use protects humans, areas where we gladly let them work for us: for example in handling dangerous goods, toxic paints, radioactive materials. Or in many nursing activities that are either mindless, like counting the tablets in the medicine cup, or that strain the back, like turning bedridden patients.

Robots create time for people

I worked in a nursing home for 18 months. Having grown up on a farm, I thought: I know what hard work is. I was to be proven wrong.

One of the few night shifts I did was on New Year's Eve. When the firecrackers and rockets exploded en masse, a 90-year-old wandered through the room and shouted: "The bombers are coming."

I didn't know what to do.

I wasn't bad at developing roles. I was a nephew, a grandson, a hunter, a friend of the husband; I played along with everything, supplied everything that the faded memories of barely recognizable scenarios dictated. In return, the patients presented me with alertness, with stories, with joy.

But here I didn't know what to do next. On an inexplicable impulse I took the woman's hand, which immediately clenched tightly around mine. I did not let go of her; all night I walked with her through this New Year's Eve torn by explosions, talking as much as I could and as calmly as I could.

That night I would not only have liked a robot at my side to relieve me of turning the stroke patient every four hours or handing out the drinking cup that already stood on the table next door. I needed help badly, because whenever I had to let go of her hand, the fear got worse. Her panic needed my hand.

I thought several times of giving up the work. I had to see a doctor because of back problems, and two or three times I was harsh with the old people because there was so much to do, something I am ashamed of to this day. I suspected that this work was beyond my strength. A robot would have helped. There would have been more time to listen. Time for the moments when I simply had no solution. But at least I still had a hand. And people sometimes need a hand more than a solution or a glass of water.

We have to train programs

There is no either/or, in the debate about artificial intelligence least of all. If we clearly define the tasks, then we can construct robots that buy us humans time, so that we can give patients their dignity. But if we let military technologists take control, then we get combat drones that independently decide which person will be dead in the next moment.

It would make no difference to me whether I was killed on a human's command or by a program's algorithms alone. And as long as we oh-so-sensible humans do not take the observance of the fifth commandment very seriously ourselves, we should exercise a certain generosity when it comes to the ethical inexperience of AI.

But there are other inexperiences that should worry us. The poet Paul Valéry once wrote: "The deepest thing about people is the skin." It cannot be foreseen whether a non-biological being will ever understand this sentence. The thickness of the skin can be measured, between 1.5 and four millimeters. How is that supposed to be the deepest part of a person? And yet for every lover touched by the object of his desire, the truth of this sentence is beyond question.

It is unclear whether neural networks will ever do more than simulate human thinking. Some say no, for theological reasons, which is not surprising, but also for scientific ones. The renowned philosopher John Searle, for example, points out that an algorithm only works if all calculation steps are carried out one after the other. Of course we humans do that too, but not only that. Many great discoveries rest on ingenious ideas, on sudden inspirations, on our ability to discover a rule by chance. Searle calls this our ability to think semantically, that is, to fill something with meaning.

Much of what we recklessly call artificial intelligence is still based on the ability to find correlations in gigantic collections of data. But the programs are not yet able to recognize whether there is a real causality or just a correlation. The many births in spring and the simultaneous arrival of the storks are a correlation; a causality, as even small children know, they are not.
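
The stork example can be reproduced in a few lines. A minimal sketch with synthetic data: two series that share a seasonal driver correlate strongly, although neither causes the other:

```python
# A minimal sketch (synthetic data): storks and births both follow the
# season, so they correlate, without one causing the other.
import math
import random

random.seed(0)
months = range(48)  # four years, monthly
season = [math.sin(2 * math.pi * m / 12) for m in months]

storks = [10 + 5 * s + random.gauss(0, 1) for s in season]
births = [80 + 20 * s + random.gauss(0, 4) for s in season]

def corr(xs, ys):
    # Pearson correlation coefficient.
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)

# High correlation, but the common cause is the season, not the storks.
print(f"correlation: {corr(storks, births):.2f}")
```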

Nobody knows whether artificial intelligence, whether a deep neural network for example, will one day be able to do that: determine a cause. Neural networks imitate our brain. There, neurons forward an impulse more readily the more often it occurs. Put simply, we learn from experience. AI counts all incoming data, and what occurs more frequently is established as a pattern. The program then searches for this pattern, and if it finds more hits, this pattern becomes the norm. That means we have to train these programs: we have to teach them what to delete, what to add, which threshold value is appropriate to distinguish a series of coincidences from a rule. That means we teach the machine how to analyze data in a meaningful way. Further than that we have not yet come. The AI is still learning to learn.
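
What such a threshold means can be shown in miniature. A minimal sketch with an invented event stream: count recurrences, and promote to a "pattern" only what clears the threshold a human trainer has chosen:

```python
# A minimal sketch of the frequency-and-threshold idea: what recurs
# often enough is treated as a pattern, the rest as coincidence.
from collections import Counter

events = ["A", "B", "A", "C", "A", "B", "A", "D", "A", "B"]
counts = Counter(events)

THRESHOLD = 3  # set by the human trainer: below it noise, above it a rule
patterns = {e for e, n in counts.items() if n >= THRESHOLD}
print(patterns)  # {'A', 'B'}: frequent enough to count as a pattern
```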

Nobody knows whether our brain really works that way, or whether that describes our ability to learn in its entirety. The proponents of the thesis that intelligence functions independently of its carrier medium, that is, works anywhere and not just in brains, have become somewhat quieter.

Except for the manageable group of transhumanists, who are driven by the somewhat vain-seeming hope that their scanned consciousness could live on forever in the networks. It is no coincidence that some transhumanists are demanding recognition as a religion. Sometimes the problem is not the level of sophistication of AI but the euphoria of its human advocates. It is not the machines' fault that we overestimate their capabilities.

But the programs are already superior to us on one point: they ask. What sounds harmless, even banal, points to a great weakness of ours. We stop asking. We don't ask enough. The blatant yet invisible racism of the American founding fathers is one example. Another, more modern one makes it even clearer. In 1974 the physician Harald zur Hausen published a report on the role of viruses in the development of cervical cancer. The researcher was initially harshly criticized, because he contradicted the doctrine of the time that cancer is not caused by viruses. The crucial difference: against viruses, vaccinations can be developed. Which was finally tried, with success. Because a scientist contradicted the prevailing opinion, a vaccine became available in 2006 that immunizes against two types of cancer-causing viruses. Not only can computers learn from people; we can learn from them too. We can learn to see the questions that arise from data analysis. We can learn to question. We think we know. Artificial intelligence, on the other hand, is a follower of Socrates: it can only know something because it acts as if it knew nothing.

No war between man and machine

So we too should learn to learn again, and rather leave the rote memorization that dominates the curriculum to the machines. The playful, the curious, the poetic, the creative human has survived in a constantly changing environment. There are insects that are perfectly adapted to a single host plant. We are not. As a biological species we are generalists who survive even chaotic situations, because we are able to grasp the new without already knowing it.

And that's why nobody has to worry about us disappearing. The future intelligence, in which we will no longer or hardly notice the artificial, will not lock us in a human park because it has deciphered our code and we have become boring to it. An intelligence implanted with curiosity and openness will appreciate beings who keep setting it tasks it has to chew on. Since the mathematician Kurt Gödel we have known that sufficiently complex systems cannot be completely proven from within themselves: they contain statements that are true but not provable inside the system. Put another way, systems rest on assumptions they cannot themselves justify. Our social life is based on such assumptions: love, dignity, intelligence. With them we have created wonderful things: music, languages, word games, wine, counting rhymes, freedom. Achievements that an artificial intelligence, if it is not stupid, will appreciate.

So there is no threat of a war between two intelligent species, no "survival of the fittest", no extinction of humanity. No new super-species will displace us the way Homo sapiens displaced Homo neanderthalensis, modern man the Neanderthal. Coevolution is also conceivable: a side-by-side existence, an appreciation of those abilities that the other does not possess to the same extent. We fear the robots because they threaten to become too similar to us. That is human, but it doesn't have to be that way. In the fourth part of In Search of Lost Time, Marcel Proust raises the question of whether we don't actually love precisely what is not like us. We are also drawn to differences.

We don't have to love the robots right away. It is more about humility. For millennia we looked down on barbarians, primitive peoples and savages. We look down on people we don't see as equals, and even more so on beings that, because they are animals, we treat like cattle. Orcas talk; suckler cows suffer when farmers take their newborn calves away from them; keas, parrots from New Zealand, learn from the successful behavior of their conspecifics; Matabele ants drag wounded conspecifics back into the nest to care for them. We're not as unique as we think we are.

Darwin's theory of evolution was ridiculed by an unnamed draftsman who caricatured him as a monkey with a human head. But the artist was more right than he knew: chimpanzees use tools, use sex as a means of social interaction and, yes, fight other groups when they invade their territory. They are already very close to us.

Close enough that I stare at them in the zoo and find their gestures fascinating.

Charles Darwin suggested that the difference between humans and apes is one of degree, not of kind. The difference between artificial and human intelligence may also be only one of degree. It may be that imitating humans will make the machines appear very human. And then? Then the question is how we will look at each other: as opponents? Or with fascination? The robots, the neural networks, the artificial intelligences - they are getting closer. If we are not afraid of them, we have a good chance of learning from each other.