Should we be afraid of AI? The military wants a real answer by the end of 2015

The Military’s New Year’s Resolution for Artificial Intelligence

In November, Undersecretary of Defense Frank Kendall quietly issued a memo to the Defense Science Board that could go on to play a role in history.

The memo calls for a new study that would “identify the science, engineering, and policy problems that must be solved to permit greater operational use of autonomy across all war-fighting domains… Emphasis will be given to exploration of the bounds, both technological and social, that limit the use of autonomy across a wide range of military operations. The study will ask questions such as: What activities cannot today be performed autonomously? When is human intervention required? What limits the use of autonomy? How might we overcome those limits and expand the use of autonomy in the near term as well as over the next 2 decades?”

A Defense Department official very close to the effort framed the request more simply. “We want a real roadmap for autonomy,” he told Defense One. What does that mean, and how would a “real roadmap” influence decision-making in the years ahead? If the results of the Defense Science Board’s 2015 Summer Study on Autonomy are eventually made public, the report’s findings could confirm or refute some of our worst fears about the future of artificial intelligence.

Source: Defense One
