Thanks to decades of AI villains in the movies, and to technology leaders who tend to hyperbolize its dangers, some fear that artificial intelligence is a threat to humanity — but is it?

The news is a steady stream of new things to live in terror of, and artificial intelligence is certainly something we’ve been told to fear. Take the movies: Ultron from The Avengers, the machines in The Matrix and the Terminator films, HAL 9000 from 2001: A Space Odyssey, and many more. In each of these films, a man-made machine somehow gains consciousness and decides that man is an enemy that must be eliminated.

Speaking at a technology conference in Texas this March, Elon Musk of Tesla and SpaceX stated, “I think [AI] is the single biggest existential crisis that we face and the most pressing one.” He added, “…Mark my words, AI is far more dangerous than nukes,” and called for regulatory oversight of artificial intelligence, comparing AI development to “summoning a demon.”

British inventor Clive Sinclair said he believes AI will doom mankind, according to The Washington Post, and Microsoft’s Bill Gates mused aloud, “I don’t understand why some people are not concerned.”

Clearly, we are meant to be concerned that robotic beings of one sort or another will one day rule the world.

Before we start filling in that second number on mankind’s tombstone, though, we should ask whether the threat is really that dire. Is artificial intelligence as portrayed in movies and science fiction — a rational, sentient, thinking, feeling consciousness capable of contriving evil on its own — even a real possibility?

I don’t think so, and neither do most of the industry experts I spoke with for this issue’s cover story, “Get Smart About AI in Security,” on page 52.

Discussing weaponized AI systems that suddenly decide all humans are the enemy, Stephen Smith, manager of IT services, D/A Central, Oak Park, Mich., said, “That technology is awfully fanciful and pie-in-the-sky and doesn’t have legs on the ground yet.”

Travis Deyle, cofounder and CEO, Cobalt Robotics, San Mateo, Calif., said of his conversations with luminaries in the AI field, “Today we are at the equivalent of cavemen playing with fire, and what you’re trying to do is like describing to them a nuclear weapon.”

Deyle said we don’t even have the vocabulary to understand or discuss general AI (the type of AI that has human-like consciousness) in any meaningful way. While we can create systems that execute very narrow tasks at or beyond human ability, such as recognizing objects in video or playing chess, he said, creating a machine as capable as even a toddler is not yet conceivable, because toddlers do so many things that are not yet reproducible.

He explained that while we can make a machine capable of recognizing that something is “weird,” we haven’t yet created a machine that knows what “weird” is.

If all that is true, then is it even worth moving forward with AI research and development?

Unequivocally, yes. The practical benefits of what we call “AI” in security — even though it may not be what science fiction typically imagines — are enormous, and the technology is already proving useful in countless ways.

As with so many areas of technological development, humans are most creative when inspired by the wonders of creation — and what higher inspiration can there be than the example of a reasoning being with superior mental development and the power of articulate speech?

Whether or not humans are ever able to develop something as sophisticated as a toddler, the seeming impossibility of the task reminds me of Shakespeare’s observation in Hamlet: “What a piece of work is a man! How noble in reason, how infinite in faculty!”