What I Think About the Singularity

“The Singularity” means different things to different science-minded people, but the definition most familiar to me is something like “the point at which artificial intelligence surpasses human intelligence and takes over.”

And I say that’ll be the day. If the “intelligence” of computers were that impressive, surely some of it would have trickled down to the technology we’re all stuck with every day, which can’t respond to the ambiguity present in almost all human interactions.

But then, maybe it isn’t intelligence that will be taking over. Maybe computers will succeed in negating so much of our intelligence that humankind will function like a cheap toy robot and nobody will mind. Look how much of our common sense we have already had to abandon to computers as we try to get a question answered or a mistake corrected or an appointment changed.

We humans were doing fairly well in the old days, with reading and writing and math, tools and machines and ideas. Brains and talent were the abilities we prized. We expected our exchanges to make sense. We expected questions to be understood and answered, and the answers to match the questions, because human brains were in charge and that’s just how they work. We were making a lot of mistakes, but we were building something; we were learning.

But when technology advanced to the point where computers with their speed and memory capacity could be programmed to do many things faster and better than humans can do them, they began to seem like more than machines. Innovators’ respect for them went wild, and to make the most of what computers could do, they willingly ignored the fact that the devices couldn’t understand a word anyone said, let alone any nuances of anyone’s words, unless they had been specifically programmed to react (not respond) to them. We—all of us—have now had to make accommodations to those limits. The work world changed, human interaction and its satisfactions were largely phased out along with many human habits and skills, and a new set of standards and priorities took precedence—along with new generations. It’s no wonder that, in some minds, as the possibilities progressed, the idea of a superhuman kind of creature—sort of an Artificial Intelligence monster—began to haunt imaginations. Would it take over the world?

Of course we don’t need to worry about computers being smarter than we are. Computers can be taught to do all kinds of things, but whatever they’re doing, it isn’t thinking. They weren’t ready to run things, and they never will be. A computer can’t really answer a question, understand a train of thought, or make a decision; a computer has no intelligence. All it can do is elaborate on its own human-generated abilities, making them more and more complicated but never providing a new insight. A puppy has more talent. Computers lack interest, intuition, empathy, and, maybe most revealing of all, a sense of humor—all components of human intelligence.

I know it’s a different world now, but I’m homesick for the one we’ve lost. I just wish we could find a way to slow down, continue to develop our own brains, learn somehow to manage our conflicts, and face what it will take to keep us from destroying ourselves.