Rise of the Machines

3/9/16
 
 
from Legatus Magazine, 3/1/16:

Is artificial intelligence a grave threat to humanity?

Movies about killer robots make millions at the box office every year. Blockbusters like Blade Runner (1982) and the Terminator series — plus newer films like Ex Machina and I, Robot — have thrilled and frightened millions of moviegoers over the years.

While these films can be very entertaining, some notable leaders — like Stephen Hawking and Bill Gates — believe the plots of these films are plausible.

Hawking, the famous theoretical physicist and cosmologist, told the BBC in 2014: “The development of full artificial intelligence could spell the end of the human race.” Gates, co-founder of Microsoft, told Reddit in an interview last year: “I am in the camp that is concerned about super intelligence.”

Catholics around the world rightly question whether there really is something to worry about. But some on the non-technical side are hard-pressed to define AI.

“AI is a human amplifier,” said Robert Panoff, a computational physicist and executive director of the Shodor Foundation. “It’s a human telling a computer to look for patterns that maybe a human would not have thought of. The computer learns in ways it was told to learn.”

Examples of AI include IBM’s “Watson,” a computer system that beat Jeopardy champions in 2011. Other examples include language translation programs and voice recognition programs like Apple’s “Siri” and the Amazon Echo, a voice command device answering to the name “Alexa.”

As lifelike as these programs seem, however, there are several areas where human intelligence and artificial intelligence differ greatly.

“Humans are much more creative,” Panoff told Legatus magazine. “Computers cannot process certain visual information.”

An essential aspect of the ethical debate swirling around artificial intelligence centers on the question: How do you program a machine to act and think like a human being?

“Remember, we have not defined what it means to be human yet, let alone a robot,” said Eugene Gan, professor of media technology, communication, and fine arts at Franciscan University of Steubenville. “What does intelligence mean? How do we program a robot to paint a beautiful painting? How do you program a robot to comfort a child?”

Panoff doesn’t believe the earth will see killer robots beyond what already exists. “What we have to fear is humans giving control of human decisions to a computer without a stopgap.”

Gan said we shouldn’t fear AI, but rather the human beings creating it.

Don Howard, a professor of philosophy at the University of Notre Dame and former director of the Reilly Center for Science, Technology, and Values, disagrees with Hawking and Gates.

“This kind of doomsday scenario is just not realistic,” Howard said. “The worst thing is that they are drawing attention away from real issues with AI.”

The biggest single problem with artificial intelligence, he said, will be the loss of human jobs to machines.

The area of AI that worries many people is that of autonomous weapons systems — where targets are chosen and destroyed without any human involvement.

“Israel has something called the Iron Dome,” Howard said. “It is autonomous. It identifies a missile and launches a counter missile in seconds. Great Britain has something called Brimstone. This system has the capacity to identify a vehicle and see if it’s a tank or passenger vehicle and fire. But how can you be 100% sure of your target?”

The United Nations met to discuss autonomous weapons in Geneva twice last year. The next meeting takes place in April.

The Church and science

Gan wrote about the last seven decades of Church teaching with regard to technology in his 2010 book Infinite Bandwidth: Encountering Christ in the Media.

“The first thing to know is that the Church has always been in favor of technology and has written about it since 1936,” he said. “The Church teaches that technology can be very good, but it must be at the service of man.”

“When scientists speak of intelligence, they are not considering the gift of grace which enlightens the intellect, or the reality of the soul. Human intelligence includes experience, memory, wisdom and even concupiscence,” he said.

Father Tad Pacholczyk, director of education at the National Catholic Bioethics Center, says that ultimately any new technology — like AI — can be used for good or for evil.

“The problem is not with the technology itself, but with the various agendas that are likely to dictate its subsequent use — and the flawed or morally corrupt human beings who oftentimes seem to end up making those particular decisions,” he said.
