Too much technology is killing us

Artificial intelligence: will it one day kill us?

Jay Tuck

Artificial intelligence lives in the big wide world of big data. Only with its help can we navigate the endless sea of information. A single person, even an entire army, would be hopelessly overwhelmed. AI can do it. And soon it will be able to do far more. According to its inventors, it will in the near future be a thousand times smarter than all of humanity ... welcome to the Google scale.

Artificial intelligence is software that updates itself. It handles masses of data and programs that are beyond human imagination. It does not stubbornly follow the algorithms laid down by its programmers. It is capable of learning, acts independently, and does things that we do not understand.

“Artificial intelligence can become the greatest achievement of mankind. Unfortunately, it can also be the last.”

Stephen Hawking, astrophysicist

Intelligent systems that continue to develop on their own are imperceptibly pushing ever deeper into tasks once reserved for highly skilled humans. More and more objects in smart homes and smart cities are being equipped with AI. Global financial transactions are shifting from the trading floor to data centers, where automated algorithms close deals worth billions within microseconds.

Whether it's buying a flight ticket, reserving a hotel room or ordering from Amazon, prices and conditions are adjusted to the market every minute - fully autonomously, and often in ways that not even the top managers involved can understand. Every day we hand over more responsibility to artificial intelligence - in urban planning and energy supply, in management and medicine, even in warfare.

When artificial intelligence reaches critical mass and is able to write its own software at high speed, it will multiply explosively. Small cores with adaptive intelligence will network, core by core, into decentralized mainframes. In the Internet of Things they will link up with one another in opaque ways. They will collect data, borrow computing power and exchange software. Like drops of mercury on a glass plate, they will find each other - and merge.

After that, their growth will be explosive.

AI will be worldwide.

On the scale of Google.

And in an emergency, there will be no plug to pull.

Evolution Without Us: Will Artificial Intelligence Kill Us?

In his current book Evolution without Us (19.99 euros), Jay Tuck summarizes the results of two years of exclusive research among German drone pilots, US armaments planners, NATO military strategists and AI researchers.

Darwin's darling

But what happens if we override Charles Darwin's laws of nature and create an intelligent being that is far superior to us? What if we are no longer Darwin's darling?

It is known that AI occasionally gets out of hand. The developers of a Microsoft chat program called TAY, for example, learned that their AI was doing unpredictable things. TAY was supposed to be Microsoft's answer to Apple's Siri and Google's Allo - polite, well-informed and always up to date. Only a few answers were pre-programmed. TAY was controlled by artificial intelligence and was supposed to learn on its own.

But TAY ran amok.

Shortly after its launch in February of this year, TAY suddenly began spreading racist slurs across the Twitter universe, along with genocide slogans and the wildest conspiracy theories. The community was shocked. And Microsoft had a problem. Nobody could explain where the racist outbursts came from. "We had to take TAY offline and make adjustments," said a spokesman. Somebody had taught the Microsoft machine bad things, and the software had picked them up.

Experts weren't surprised. Learning software is supposed to learn. And that is not always controllable.

And yet, artificial intelligence is being given more and more responsibility - even for heavy weapons. AI combat robots have long played an important role in the military. Every third vehicle in the US armed forces is now an intelligent machine. The familiar remote-controlled drones used by Germany and the USA are already out of date.


Jay Tuck with a killer drone, Holloman Air Force Base, USA

The next generation will fly entire missions on its own - including landing on an aircraft carrier. The X-47b Pegasus, a delta-wing aircraft with the appearance of a UFO, has a range of 4,000 kilometers and a payload of an estimated 2,000 kilograms. En route, it makes all decisions itself. The exception: the so-called "kill decision". That is reserved by law for a human operator.


The final threshold for artificial intelligence is set to fall soon: the decision between life and death is to be left to the machines. In official strategy papers, Pentagon planners have announced robotic weapons that kill autonomously. The aim is "complete independence from human decisions", as an official army document puts it. Navy documents consider scenarios in which "unmanned submarine drones track down, pursue, identify and destroy the enemy - all fully automatically."

X-47b Pegasus, a delta wing with the appearance of a UFO
© US Department of Defense

The machines are not ready yet. They still make mistakes. During maneuvers in Iraq in 2007, an intelligent combat robot (type SWORD) suddenly aimed its 5.56 mm machine gun at its own troops. Only a soldier's courageous intervention prevented a bloodbath at the last second. The SWORD combat robot was classified as unsafe, and its field deployment was canceled.

The incident was a wake up call.

“Artificial intelligence is the greatest existential threat to humanity. With it, we are summoning the demon.”

Elon Musk, Tesla

Artificial intelligence, learning software - that means nothing less than software writing its own updates. In the process, it learns things that cannot be foreseen and does things that we cannot understand. Often even its own developers cannot decipher the code that the self-learning software has written.

Over time, many thought leaders in the IT world fear, artificial intelligence could free itself entirely from human influence in this way. The only question is: what will it do then?

Many believe it could kill us.

Some think it will kill us.

By now, many of Silicon Valley's most renowned thinkers are seriously concerned. Men like Elon Musk and Bill Gates, Peter Thiel and Stephen Hawking are convinced that artificial intelligence could become an existential threat to us. In the near future, it might be able to wipe out all of humanity.

To this day, our society has not understood the damage that big data has caused. Nobody saw it coming. It grew at an exponential rate - from kilobyte floppies to megabyte floppy disks, gigabyte sticks and terabyte hard drives - in thousandfold leaps.
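Those thousandfold leaps are easy to verify. A few lines of Python (an illustration added here, using standard decimal SI unit sizes, not figures from the article) show that each step from kilobyte to terabyte is exactly a factor of 1,000:

```python
# Decimal SI storage units: each one is 1,000 times the previous.
units = {
    "kilobyte": 10**3,   # 1,000 bytes
    "megabyte": 10**6,   # 1,000,000 bytes
    "gigabyte": 10**9,   # 1,000,000,000 bytes
    "terabyte": 10**12,  # 1,000,000,000,000 bytes
}

names = list(units)
for prev, curr in zip(names, names[1:]):
    factor = units[curr] // units[prev]
    print(f"{prev} -> {curr}: x{factor}")  # each step prints x1000
```

From floppy to hard drive, that is nine orders of magnitude overall - the kind of exponential growth the text describes.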