Stephen Hawking, Elon Musk, and Bill Gates Warn About
Artificial Intelligence
By Michael Sainato • 08/19/15 12:30pm
Stephen Hawking, Elon Musk, and Bill Gates. (Photo: Getty Images)
Some of the most popular sci-fi movies—2001: A Space Odyssey, The Terminator, The
Matrix, Transcendence, Ex Machina, and many others—have been based on the notion that
artificial intelligence will evolve to a point at which humanity will not be able to control its
own creations, leading to the demise of our entire civilization. This fear of rapid technology
growth and our increasing dependence on it is certainly warranted, given the capabilities of
current machines built for military purposes.
Already, technology has had a significant impact on warfare since the Iraq war began in
2003. Unmanned drones provide sustained surveillance and swift attacks on targets, and
small robots are used to disarm improvised explosive devices. The military is
currently funding research to produce more autonomous and self-aware robots to diminish
the need for human soldiers to risk their lives. Marc Raibert, founder of Boston Dynamics,
released a video showing a terrifying six-foot-tall, 320-lb. humanoid robot named Atlas,
running freely in the woods. The company, which was bought by Google in 2013 and
receives grant money from the Department of Defense, is working on developing an even
more agile version.
The inherent dangers of such powerful technology have inspired several leaders in the
scientific community to voice concerns about Artificial Intelligence.
“Success in creating AI would be the biggest event in human history,” wrote Stephen
Hawking in an op-ed, which appeared in The Independent in 2014. “Unfortunately, it might
also be the last, unless we learn how to avoid the risks. In the near term, world militaries
are considering autonomous-weapon systems that can choose and eliminate targets.”
Professor Hawking added in a 2014 interview with the BBC, "humans, limited by slow
biological evolution, couldn't compete and would be superseded by A.I."
The technology Mr. Hawking describes has already begun to take shape in several forms,
from U.S. scientists using algorithms to predict the military strategies of Islamic
extremists to companies such as Boston Dynamics, which have successfully built mobile
robots and steadily improved upon each prototype they create.
Mr. Hawking recently joined Elon Musk, Steve Wozniak, and hundreds of others in issuing a
letter unveiled last month at the International Joint Conference on Artificial Intelligence
in Buenos Aires, Argentina. The letter warns that artificial intelligence can potentially be
more dangerous than nuclear weapons.
Elon Musk called the prospect of artificial intelligence “our greatest existential threat” in a
2014 interview with MIT students at the AeroAstro Centennial Symposium. “I’m
increasingly inclined to think that there should be some regulatory oversight, maybe at the
national and international level, just to make sure that we don’t do something very foolish.”
Mr. Musk cites his decision to invest in the artificial intelligence firm DeepMind as
a means to “just keep an eye on what’s going on with artificial intelligence. I think there is
potentially a dangerous outcome there.”
Microsoft co-founder Bill Gates has also expressed concerns about Artificial
Intelligence. During a Q&A session on Reddit in January 2015, Mr. Gates said, “I am in the
camp that is concerned about super intelligence. First the machines will do a lot of jobs for
us and not be super intelligent. That should be positive if we manage it well. A few decades
after that though the intelligence is strong enough to be a concern. I agree with Elon Musk
and some others on this and don’t understand why some people are not concerned.”
The threats enumerated by Hawking, Musk, and Gates are real and worthy of our
immediate attention, despite the immense benefits artificial intelligence can potentially
bring to humanity. As robot technology advances steadily toward widespread deployment, it
is becoming clear that robots will find themselves in situations that present several
possible courses of action. The ethical dilemma of bestowing moral responsibility on
robots demands rigorous, fail-safe safety and preventative measures; otherwise, the
threats are too significant to risk.
