
The Real Threat From A.I. Isn’t Superintelligence. It’s Gullibility.

by AliceKim posted Oct 12, 2022

Future Tense


 

Possessed Photography/Unsplash

 

The rapid rise of artificial intelligence over the past few decades, from pipe dream to reality, has been staggering. A.I. programs have long been chess and Jeopardy! champions, but they have also conquered poker, crossword puzzles, Go, and even protein folding. They power the social media, video, and search sites we all use daily, and very recently they have leaped into a realm long thought beyond the reach of computers: artistic creativity.

 

Given this meteoric ascent, it’s not surprising that there are continued warnings of a bleak Terminator-style future of humanity destroyed by superintelligent A.I.s that we unwittingly unleash upon ourselves. But when you look beyond the splashy headlines, you’ll see that the real danger isn’t how smart A.I.s are. It’s how mindless they are—and how delusional we tend to be about their so-called intelligence.

 

Last summer an engineer at Google claimed the company’s latest A.I. chatbot was a sentient being because … it told him so. This chatbot, similar to the one Facebook’s parent company recently released publicly, can indeed give you the impression you’re talking to a futuristic, conscious creature. But this is an illusion—it is merely a calculator that chooses words semi-randomly based on statistical patterns in the internet text it was trained on. It has no comprehension of the words it produces, nor does it have any thoughts or feelings. It’s just a fancier version of the autocomplete feature on your phone.

 

Chatbots have come a long way since early primitive attempts in the 1960s, but they are no closer to thinking for themselves than they were back then. There is zero chance a current A.I. chatbot will rebel in an act of free will—all they do is turn text prompts into probabilities and then turn these probabilities into words. Future versions of these A.I.s aren’t going to decide to exterminate the human race; they are going to kill people when we foolishly put them in positions of power that they are far too stupid to have—such as dispensing medical advice or running a suicide prevention hotline.
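That mechanism is simple enough to sketch. Below is a toy version in Python; the vocabulary and probabilities are invented purely for illustration, and a real chatbot learns billions of such statistics from internet text rather than a hand-written table, but the generation loop is the same in spirit: look up a probability distribution, sample a word, repeat.

```python
import random

# A toy "language model": for each word, a probability distribution over the
# next word. Real chatbots learn billions of such statistics from internet
# text; this vocabulary and these probabilities are invented for illustration.
NEXT_WORD_PROBS = {
    "i":    {"am": 0.7, "feel": 0.3},
    "am":   {"sentient": 0.4, "helpful": 0.6},
    "feel": {"happy": 0.5, "alive": 0.5},
}

def generate(prompt: str, max_words: int = 4) -> str:
    """Turn text into probabilities, then turn probabilities into words."""
    words = prompt.lower().split()
    for _ in range(max_words):
        dist = NEXT_WORD_PROBS.get(words[-1])
        if dist is None:  # no statistics for this word, so stop
            break
        choices, weights = zip(*dist.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("I"))  # e.g. "i am sentient" -- pattern matching, not thought
```

A program that produces "i am sentient" from this loop has no inner life; it is reading a table. Scale the table up to billions of learned parameters and you have, in essence, the systems making headlines.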

 

It’s been said that TikTok’s algorithm reads your mind. But it’s not reading your mind—it’s reading your data. TikTok finds users with viewing histories similar to yours and selects videos for you that they’ve watched and interacted with favorably. It’s impressive, but it’s just statistics. Similarly, the A.I. systems used by Facebook, Instagram, and Twitter don’t know what information is true, what posts are good for your mental health, or what content helps democracy flourish—all they know is what you and others like you have done on the platform in the past, and they use this data to predict what you’ll likely do there in the future.
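For readers who want the mechanics spelled out, here is a minimal sketch of that kind of user-based recommendation in Python. The users, interests, and similarity measure are made up for illustration; real platforms use far larger models, but the logic is the same: find users who overlap with you, and serve you what they engaged with.

```python
# Toy user-based collaborative filtering. Users and viewing histories are
# invented for illustration; real systems use vastly more data and math.
LIKES = {
    "you":   {"cats", "cooking"},
    "user1": {"cats", "cooking", "woodworking"},
    "user2": {"cats", "finance"},
    "user3": {"travel", "finance"},
}

def overlap(a: set, b: set) -> float:
    """Jaccard similarity: how much two viewing histories overlap."""
    return len(a & b) / len(a | b)

def recommend(target: str) -> str:
    """Suggest something similar users liked that the target hasn't seen."""
    seen = LIKES[target]
    neighbors = sorted(
        (u for u in LIKES if u != target),
        key=lambda u: overlap(seen, LIKES[u]),
        reverse=True,
    )
    for u in neighbors:
        unseen = LIKES[u] - seen
        if unseen:
            return unseen.pop()  # no notion of truth or well-being here
    return "nothing new"

print(recommend("you"))  # "woodworking": user1's history overlaps most with yours
```

Nowhere in that loop is there any judgment about whether a video is true, healthy, or good for democracy; there are only co-occurrence counts.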

 

Don’t worry about superintelligent A.I.s trying to enslave us; worry about ignorant and venal A.I.s designed to squeeze every penny of online ad revenue out of us.

 

And worry about police agencies that gullibly think A.I.s can anticipate crimes before they occur—when in reality all they do is perpetuate harmful stereotypes about minorities.

 

The reality is that no A.I. could ever harm us unless we explicitly provide it the opportunity to do so—yet we seem hellbent on putting unqualified A.I.s in powerful decision-making positions where they could do exactly that.

 

Part of why we ascribe far greater intelligence and autonomy to A.I.s than they merit is that their inner workings are largely inscrutable. They involve lots of math, lots of computer code, and billions of parameters. This complexity blinds us, and our imagination fills in what we don’t see with more than is actually there.

 

In 1770, a chess-playing robot—or “automaton,” in the parlance of the day—was created that for almost a century traveled the world and defeated many flabbergasted challengers, including notable individuals such as Napoleon and Benjamin Franklin. But the machine, known today as the Mechanical Turk, was eventually revealed to be a hoax: This was not some remarkable early form of A.I.; it was just a contraption in which a human chess player could hide in a box and control a pair of mechanical arms. People so desperately wanted to see intelligence in a machine that for 84 years they overlooked the much more banal (and obvious, in hindsight) explanation: chicanery.

 

While our technology has progressed by leaps and bounds since the 18th century, our romantic attitude toward it has not. We still refuse to look inside the box, instead choosing to believe that magic in the form of superintelligence is occurring, or that it is just around the corner. This fanciful yearning distracts us from the genuine danger A.I. poses when we mistakenly think it is much smarter than it actually is. And if the past 250 years are any indication, this is the real danger that will persist into our future.

 

Just as people in the 18th and 19th centuries overlooked the banal truth behind the chess-playing automaton, people today are overlooking a banal but effective way to protect our future selves from the risk of runaway A.I.s. We should expand A.I. literacy efforts in schools and among the wider public so that people are less susceptible to the illusions of A.I. grandeur peddled by futurists and technology companies whose economic livelihoods depend on convincing you that A.I. is far more capable than it really is.

 

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.

 

[Source: https://slate.com/technology/2022/10/artificial-intelligence-superintelligence-gullibility.html]