Artificial Intelligence: Not An Artificial Danger

In the last article, we discussed threats from outer space, including an asteroid collision. We can prepare for some of those events—if we care. Now, let’s get back to earth. What other existential risks does humanity face? Science fiction fans have been worrying about this one for many years.

In the 1977 movie Demon Seed, a supercomputer is solving human problems even faster than we can cause them. We’re headed for utopia. But that scares world leaders, so they decide to shut down the computer. However, this is a pretty smart machine, so it figures out how to produce artificial sperm and implants all it knows into the “seed.” Julie Christie becomes the mom, and at the end of the movie, we’re left to wonder how this super-genius half-android will deal with the world.

Yes, that’s fantasy, but artificial intelligence (AI) has come a long way, according to BuiltIn. Maybe too far already, according to some great minds. The story also ran in Forbes.

Last March, at the South by Southwest tech conference in Austin, Texas, Tesla and SpaceX founder Elon Musk issued a friendly warning: “Mark my words,” he said, billionaire casual in a furry-collared bomber jacket and days-old scruff, “AI is far more dangerous than nukes . . . I am really quite close . . . to the cutting edge in AI, and it scares the hell out of me,” he told his SXSW audience. “It’s capable of vastly more than almost anyone knows, and the rate of improvement is exponential.” . . .

A year prior, the late physicist Stephen Hawking was similarly forthright when he told an audience in Portugal that AI’s impact could be cataclysmic unless its rapid development is strictly and ethically controlled. “Unless we learn how to prepare for, and avoid, the potential risks,” he explained, “AI could be the worst event in the history of our civilization.” . . .

Stuart Armstrong from the Future of Humanity Institute has spoken of AI as an “extinction risk” were it to go rogue . . . “If AI went bad, and 95 percent of humans were killed,” he said, “then the remaining five percent would be extinguished soon after. So despite its uncertainty, it has certain features of very bad risks.”

That reminds me of an episode of the original Star Trek (“What Are Little Girls Made of?”), in which benign androids serve humans—until they realize that their prime directive is survival. They know the weakness and stupidity of humans, and they decide they must destroy the humans to assure their own survival.

The article lists six risks of AI. One is automation-spurred job loss. That was the theme of Democratic presidential aspirant Andrew Yang’s campaign. He warned that machines will be taking over the jobs of humans, and his solution was to tax the productivity increase and pay every American adult $1,000 a month (a “Freedom Dividend”) out of that value, so people can find their own creative ways to be productive.

The second is privacy violations.

In a February 2018 paper titled “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation,” 26 researchers . . . [wrote that AI] “could threaten digital security (e.g. through criminals training machines to hack or socially engineer victims at human or superhuman levels of performance), physical security (e.g. non-state actors weaponizing consumer drones), and political security (e.g. through privacy-eliminating surveillance, profiling, and repression, or through automated and targeted disinformation campaigns).”

A third risk is “deepfakes,” whose swift improvement we’ve already seen. It used to be said that a wise person believes a quarter of what he hears and half of what he sees. Today, unless you trust the source, you can’t believe anything. And that goes for all political persuasions.

It’s just too easy to manipulate sight and sound in the digital world.

A fourth is socioeconomic inequality. Without a program such as the one proposed by Andrew Yang, the chasm between rich and poor is likely to grow exponentially, as those who use AI will have an even greater advantage over those who have AI used against them.

And, finally, weapons automatization. We’ve already seen that someone sitting at a desk in North Carolina, or wherever, can kill hundreds or thousands of people on the other side of the world with the push of a button. But a human is still pushing the button. What will happen when smart computers analyze and act without human morals?

How scary is all this? New Scientist notes that there was a debate on the topic—with a robot arguing both sides!

Project Debater, a robot developed by IBM, spoke on both sides of the argument, with two human teammates for each side helping it out. Talking in a female American voice to a crowd at the University of Cambridge Union on Thursday evening, the AI gave each side’s opening statements, using arguments drawn from more than 1100 human submissions made ahead of time . . .

“AI can cause a lot of harm,” it said. “AI will not be able to make a decision that is the morally correct one, because morality is unique to humans . . . AI companies still have too little expertise on how to properly assess datasets and filter out bias,” it added. “AI will take human bias and will fixate it for generations.”

The robot arguing on AI’s behalf obviously didn’t win over its audience.

It claimed that AI would create new jobs in certain sectors and “bring a lot more efficiency to the workplace.” But then it made a point that ran counter to its own argument: “AI capabilities caring for patients or robots teaching schoolchildren – there is no longer a demand for humans in those fields either.”

Of course, so far, AI has been very helpful, according to the BBC.

The leading approach to AI right now is machine learning, in which programs are trained to pick out and respond to patterns in large amounts of data . . . software is being taught to diagnose cancer and eye disease from patient scans. Others are using machine learning to catch early signs of conditions such as heart disease and Alzheimer’s . . . being used to analyse vast amounts of molecular information looking for potential new drug candidates . . . also help us manage highly complex systems such as global shipping networks . . . As the technology advances, so too does the number of applications.
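
To make that quoted description concrete, here is a minimal sketch of machine learning in that sense, assuming the scikit-learn library and its bundled breast-cancer dataset as a stand-in for the medical scans the BBC mentions. The dataset, model, and numbers are illustrative assumptions, not anything from the article.

```python
# A minimal sketch of supervised machine learning: nobody writes the
# diagnostic rules by hand; the program infers patterns from labeled examples.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

# Tumor measurements labeled benign/malignant, bundled with scikit-learn.
X, y = load_breast_cancer(return_X_y=True)

# Hold out a test set so the model is judged on cases it has never seen.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=5000)  # a simple linear classifier
model.fit(X_train, y_train)                # "training": fitting the patterns

print(f"Accuracy on unseen cases: {model.score(X_test, y_test):.2f}")
```

Nothing in that program knows what cancer is; it only finds statistical regularities in the examples it is fed. That is both its power and, as the next point suggests, its weakness.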

One problem with computers is “garbage in, garbage out.” That is, a computer’s conclusions are only as good as the information it is given to analyze. As the joke goes, “to err is human—to cause total destruction, you need a computer.” And the problem is not just error. Humans have biases that we cannot even see in ourselves, so it is impossible to feed a computer purely “objective” information. Starting from a flawed basis and then growing intelligence exponentially could turn any small fault or whim into an unstoppable evil.
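
Here is an illustrative sketch of that point, again assuming scikit-learn and entirely synthetic data invented for the example: a model trained on biased decisions dutifully learns the bias.

```python
# "Garbage in, garbage out": a model trained on biased labels learns the bias.
# All data is synthetic; "skill" and "group" are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
skill = rng.normal(size=n)          # the legitimate signal
group = rng.integers(0, 2, size=n)  # an attribute irrelevant to skill

# Historical decisions were biased: group 1 needed a higher skill level
# to be approved, even though group membership says nothing about skill.
biased_label = (skill > np.where(group == 1, 0.5, -0.5)).astype(int)

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, biased_label)

# The model puts a large negative weight on the irrelevant feature:
# it has faithfully absorbed the prejudice baked into its training data.
print("weight on skill:", model.coef_[0][0])
print("weight on group:", model.coef_[0][1])
```

The model never “decided” to discriminate; it simply reproduced the pattern it was shown, which is exactly Project Debater’s warning about fixating human bias for generations.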

And that’s why the Great Minds of our time are warning us to be careful. Like the checks and balances that were once built into our Constitutional government (and have largely been defeated since), AI will have to have more than an “off” switch. It will need a way to force the system to explain itself—how it came to a conclusion, and what the repercussions would be—and a way to reprogram it, before the system decides that humans are too irrational to exist.
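
For the simplest of today’s models, that kind of self-explanation is already possible. A minimal sketch, again assuming scikit-learn and using its bundled iris dataset purely as a stand-in decision problem:

```python
# One primitive form of "explain yourself": an interpretable model whose
# learned decision rules can be printed and audited by a human.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# export_text renders the learned rules as nested if/then statements,
# so we can see exactly how the system reaches each conclusion.
print(export_text(tree, feature_names=load_iris().feature_names))
```

Whether anything like this can scale up to systems vastly smarter than we are is, of course, precisely the open question.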


Goethe Behr

Goethe Behr is a Contributing Editor and Moderator at Election Central. He started out posting during the 2008 election, became more active during 2012, and very active in 2016. He has been a political junkie since the 1950s and enjoys adding a historical perspective.
