Is Artificial Intelligence Going to Kill Us All?

October 18, 2017, 5:55 pm EDT

Before we dive headlong into a discussion about the impending destruction of the human race at the hands of machines, let’s take a couple of steps back and first ask ourselves: do we even know what Artificial Intelligence really is?

Booz Allen Hamilton is Leading Fascinating AI Research

This topic and many others were covered at Booz Allen Hamilton’s event, “The Safety and Sanity of Machine Intelligence,” at Techweek DC on October 5th. Steven Mills, Director of Machine Intelligence at Booz Allen Hamilton, offered this: “The fundamental problem is that we don’t have a single definition of Machine Intelligence. We think about it as instilling machines with the ability to perceive, understand, reason, and act in the service of some goal. Effectively, we’re building machines to carry out cognitive tasks that humans do.”

The AI Revolution Has Already Happened

Machine Intelligence, along with the roughly synonymous terms AI, Big Data, and Neural Networks, has become a buzzword seemingly overnight. But in reality, AI has been around longer than you might think.

“The AI revolution isn’t now, it was 15 years ago, when Google was launched,” said panelist William A. Carter, Deputy Director and Fellow at the Technology Policy Program at The Center for Strategic & International Studies. His comment was a stark reminder that real-life instances of AI look far more benign, and maybe even boring, than Hollywood depictions. Movies like I, Robot or Ex Machina would have audiences believe that real AI is human-looking, but in reality, it’s just a bit (or a lot) of computer code.

Just Because Something is Smart, Doesn’t Mean It Has Motivation

We experience AI every day in the form of autocorrect on our phones, and that definitely hasn’t taken over anything except ridiculous meme pages and internet humor sites. Autopilot in airplanes, which has been around for decades, has also yet to commandeer a plane. The point is, AI is already integrated into our everyday lives, and it is helping, rather than harming, mankind.

But that won’t do much for the worry-warts, who fear that what begins as help ends up as servitude. What’s more likely is that machines will increasingly become our servants, allowing us to focus on pleasure and higher-order tasks, says Dr. Amitai Etzioni, Director of the Institute for Communitarian Policy Studies at George Washington University’s Elliott School of International Affairs. That is clearly already taking place, as Google Maps helps guide us and other internet services “recommend” things to us based on our interests.

“The other thing we must ask,” he said at the panel, addressing the fear of machines taking over, “is why would AI want to take over the world? Just because someone or something is smart doesn’t mean it has motivation.”

Making AI Explainable Helps Keep It Accountable

Still, fear persists, largely because AI is so complex. “The idea of being able to explain what is going on is very important. The concept of explainable AI is a project at DARPA. Can you build an AI such that the system itself can explain its decisions and logic?” asks Kirk Borne, Principal Data Scientist at Booz Allen Hamilton.

And that leads to a final ethical question, posed by moderator Kim Hart, Technology Editor at Axios. “How do we prevent people’s unconscious bias from seeping into AI?” As AI makes more and more decisions for us, how can we ensure it’s fair? These questions will be at the heart of discussion as AI becomes further interwoven into the fabric of our lives.
