Artificial Intelligence and Humans Need to Work Together

Nadya Bliss, Director, Global Security Initiative, Arizona State University

Artificial intelligence (AI) is becoming more influential in our everyday lives, dictating what news we see in our social media feeds, transforming how we commute to work, and even improving the odds of early disease detection.

While the benefits of AI have been covered thoroughly, so have the potential negative consequences. Algorithmic bias, a changing employment landscape, and even the collapse of society at the hands of autonomous systems have been and continue to be debated.

These are important issues, but the debate is missing a key component: the cascading effects of the AI revolution on our world depend largely on how well AI and humans learn to work together.

The Department of Defense recognizes this and is at the forefront of the conversation around how to develop effective, well-functioning teams composed of humans, AI, and robots.

In the recently released National Defense Strategy (NDS), Secretary of Defense Mattis highlighted the need to prioritize the technological superiority of the United States armed forces over current and potential rivals. One area highlighted explicitly was autonomous systems, at whose core is artificial intelligence:

“The Department will invest broadly in military application of autonomy, artificial intelligence, and machine learning, including rapid application of commercial breakthroughs, to gain competitive military advantages.”

Critically, the NDS goes on to call for “anticipating the implications of new technologies on the battlefield.” The battlefield is a key place where artificial intelligence will come into contact with the chaos of the real world, with the lives of military personnel on the line.

These battlefield units will have different dynamics than either teams of humans or teams of autonomous systems. To operate at a high level, they will still require the fundamental components of an effective team: trust, communication, and shared goals. In other words, we are no longer talking about how best to optimize a machine for human use, but about how to design a team of machines and humans to advance a particular mission.

Much work exists on understanding teams of humans and how to most effectively structure collaborations. Over the last few decades, swarm robotics—the study of how to develop algorithms and technologies for operation and optimization of teams of autonomous systems—has also become a popular field of research.

Now, more focus on the opportunities and challenges of heterogeneous teams of humans and autonomous agents is needed.

These heterogeneous teams are already prevalent, in both civilian and military contexts. Most Americans rely on smartphones for navigation, search, and general assistance, and many use Alexa and Siri for added convenience in the home. In the military context, autonomy can take a number of forms, from drones to vehicles to algorithms and analytics that aid in decision making. The warfighter often has to interact with such autonomous agents when conducting intelligence, surveillance, and reconnaissance missions.

For research on such teams to reflect the complexity of the real world and be useful to decision makers, it must be interdisciplinary in nature. Engineering and computer science are, of course, vital to these advancements, but there is also a significant need for psychologists, social scientists, and even humanists to address these challenges. In the past, these other disciplines have often been brought in after the technology was designed, as an afterthought. Today’s environment requires a more integrated approach: co-designed technology that accounts for human emotions, the potential motives of bad actors, and inherent bias from the beginning.

The Defense Department’s academic partners are well-positioned to conduct this kind of interdisciplinary research. Universities have deeper and more varied disciplinary expertise than any single company can possess. And without the economic pressures facing the private sector, academia is structured to focus on longer-term research objectives and, increasingly, on mission needs.

Academia and the DoD have a long-standing record of effective partnership on highly ambitious projects. The internet is one example. ARPANET, the precursor to today’s internet, was largely created by the Advanced Research Projects Agency (ARPA, which later became DARPA, the innovation powerhouse of the DoD) in partnership with universities.

The big emerging challenge for academia in this endeavor is developing incentives for interdisciplinary research. One reason this is difficult is that federal funding priorities often map onto disciplinary silos. The current NDS indicates a willingness to change that paradigm.

The DoD has laid out its priorities. It is on the research and development community to respond. We are ready.
