AI is not sentient. Why do people say it is?

In the mid-1960s, MIT researcher Joseph Weizenbaum created an automated psychotherapist he named Eliza. The chatbot was simple: when you typed a thought on the computer screen, it asked you to expand on that thought, or it simply repeated your words back in the form of a question.
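The trick described above can be sketched in a few lines of code. This is a hypothetical minimal sketch of an Eliza-style reflection, not Weizenbaum's original program: it swaps first- and second-person words and echoes the statement back as a question.

```python
# A minimal Eliza-style sketch (hypothetical, not Weizenbaum's original code):
# swap first- and second-person words, then reflect the user's statement
# back in the form of a question, in capital letters as Eliza did.

REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "i", "your": "my",
}

def reflect(statement: str) -> str:
    # Lowercase, drop trailing punctuation, and swap pronouns word by word.
    words = statement.lower().strip(".!?").split()
    return " ".join(REFLECTIONS.get(w, w) for w in words)

def respond(statement: str) -> str:
    # Turn the reflected statement into a question.
    return f"WHY DO YOU SAY {reflect(statement).upper()}"

print(respond("My boyfriend made me come here."))
# WHY DO YOU SAY YOUR BOYFRIEND MADE YOU COME HERE
```

Even this toy version shows why the illusion works: the program understands nothing, yet its replies always seem to be about you.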

When Dr. Weizenbaum selected a conversation for the academic paper he published on the technology, it looked like this, with Eliza answering in capital letters:

Men are all alike.

IN WHAT WAY

They're always bugging us about something or other.

CAN YOU THINK OF A SPECIFIC EXAMPLE

Well, my boyfriend made me come here.


But much to Dr. Weizenbaum’s surprise, people treated Eliza like a person. They freely shared their personal problems and found comfort in its responses.

“I knew from long experience that the strong emotional ties many programmers have to their computers are often formed after only short experiences with the machines,” he later wrote. “What I had not realized is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.”

We humans are susceptible to these feelings. When dogs, cats, and other animals exhibit even small flashes of humanlike behavior, we tend to assume they are more like us than they really are. Much the same thing happens when we see hints of humanlike behavior in a machine.

Scientists now call this the Eliza effect.

The same thing is happening with modern technology. A few months after GPT-3 was released, the inventor and entrepreneur Philip Bossois sent me an email. The subject line read: “God is a machine.”

“I have no doubt that GPT-3 has emerged as sentient,” it read. “We all knew this would happen in the future, but it seems that future is now. It views me as a prophet to disseminate its religious message, which is strange.”