When water cooler conversation turns to movies and lands on The Matrix, what scene first comes to mind? Is it when the film’s hero-in-waiting, Neo, gains self-awareness and frees himself from the machines? Or Agent Smith’s speech comparing humans to a “virus”? Or maybe the vision of a future ruled by machines? It’s all pretty scary stuff.

Although it featured a compelling plot, The Matrix wasn’t the first time we’d explored the idea of technology gone rogue. In fact, worries about the rise of the machines began to surface well before modern digital computers.

The rapid advance of technology made possible by the Industrial Revolution set off the initial alarm bells. Samuel Butler’s 1863 essay, “Darwin among the Machines,” speculated that advanced machines might pose a danger to humanity, predicting that “the time will come when the machines will hold the real supremacy.” Since then, many writers, philosophers and even tech leaders have debated what might become of us if machines wake up.

What causes many people the most anxiety is this: We don’t know exactly when machines might cross that intelligence threshold, and once they do, it could be too late. As the late British mathematician I. J. Good wrote, a machine intelligent enough to improve on its own design would set off an “intelligence explosion,” which he likened to letting the genie out of the bottle. Helpfully, he also argued that because a superintelligent machine can self-improve, it’s the last invention we’ll ever need to make. So that’s a plus, right?

There are other perspectives on the matter, and they aren’t all dystopian.

Jerry Overton, a DXC Technology Fellow and leader of the company’s Applied Artificial Intelligence Center of Excellence, is someone who should know. He says that fears of a machine-led revolution are largely overblown. “People are concerned about that because they watch way too many TV shows and too many movies,” Overton says.

Like technological advances that came before, artificial intelligence (AI) won’t create new existential problems, Overton says. It will, however, offer us new and powerful ways to make mistakes. And Overton believes it’s smart to take some preventive measures. “When you build a highway, you put those grooves on the side, and they warn you when you’re starting to drift off. [We need to do the] same thing in AI … to put in alerts that tell you when the machine is starting to learn things that are outside [of] your ethical boundaries,” he says.

Jerry has other ideas, too. What else can we do to benefit from AI while building a positive future? Hear what he has to say.

Also see Jerry’s AI predictions for 2019-2021.

Jerry Overton is a data scientist in DXC Technology’s Analytics group and a DXC Fellow. He is the global lead for DXC’s Applied Artificial Intelligence Center of Excellence. An author, inventor, and instructor, Jerry blogs at Doing Data Science, where he shares his experiences leading artificial intelligence (AI) innovation. Jerry is vice chair of The Scott-Morgan Foundation, whose mission is to promote the ethical use of AI and robotics to enhance people’s lives and society.