What Google’s Canceled Pentagon Contract Says About AI and Morality

07/24/2018

Google has decided against renewing its contract with the Department of Defense after employees expressed concerns about the ethical implications of militarized AI.

In a wide-ranging 2016 discussion with WIRED Editor-in-Chief Scott Dadich and President Obama, MIT Media Lab Director Joi Ito observed, “What’s important is to find the people who want to use AI for good — communities and leaders — and figure out how to help them use it.”

President Obama echoed Ito’s sentiment, but pointed out that it remains difficult to delineate between “good” and “bad” deployments of AI. “There’s no doubt that developing international norms, protocols, and verification mechanisms around cybersecurity generally, and AI in particular, [are] in [their] infancy,” he argued. “Part of what makes this an interesting problem is that the line between offense and defense is pretty blurred.”

Over the past six months, this once-abstract debate has been thrust into the public sphere, as Google has attempted to navigate the controversy generated by its work with the US Department of Defense (DoD).

Supercharging DoD Analyses with AI

In an April 2017 memorandum, the DoD announced the establishment of the Algorithmic Warfare Cross-Functional Team (AWCFT), or Project Maven. According to the memo, “The AWCFT’s objective is to turn the enormous volume of data available to [the] DoD into actionable intelligence and insights at speed.”

Project Maven was conceived primarily as a way to integrate “advanced computer vision” into the analyses of video footage collected by military drones like the ScanEagle, MQ-1C Gray Eagle, and MQ-9 Reaper.

“A single drone…produces many terabytes of data every day,” writes Air Force Lieutenant General Jack Shanahan in an article published by the Bulletin of the Atomic Scientists in December 2017. “Before AI was incorporated into analysis of this data, it took a team of analysts working 24 hours a day to exploit only a fraction of one drone’s sensor data.”

By training Project Maven’s underlying algorithms with hundreds of thousands of human-labeled images, the DoD managed to deploy Project Maven in the military conflict against the Islamic State just eight months after the initiative was announced.

This “frankly incredible success” notwithstanding, Lt. Gen. Shanahan is well aware that the continued militarization of AI will inevitably raise a number of questions. “As US military and intelligence agencies implement modern AI technology across a much more diverse set of missions, they will face wrenching strategic, ethical, and legal challenges.”

Resistance from Within

According to a vocal minority of Google employees, however, Project Maven’s “narrow focus” is an insufficient safeguard against improper, unethical deployments of AI. Google had quietly signed an 18-month contract with the DoD late in the summer of 2017, though many of the details were kept under wraps.

After news of the tech giant’s military entanglement leaked this past March, a Google spokesperson told Gizmodo, “This specific project [Project Maven] is a pilot program with the Department of Defense to provide open source TensorFlow APIs that can assist in object recognition on unclassified data. The technology flags images for human review, and is for non-offensive uses only.”
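The pattern the spokesperson describes, an open source model that scores imagery and surfaces candidates for a human analyst rather than acting on them, is a familiar human-in-the-loop design. The snippet below is a minimal sketch of that pattern using a publicly available pretrained TensorFlow classifier; the model choice (MobileNetV2), the confidence threshold, and the file names are illustrative assumptions on our part, not details of Project Maven.

```python
# Minimal human-in-the-loop sketch: a model scores images and flags
# candidates for HUMAN review; nothing here acts on a detection.
# MobileNetV2, the 0.6 threshold, and the frame paths are assumptions.
import numpy as np
import tensorflow as tf

model = tf.keras.applications.MobileNetV2(weights="imagenet")
REVIEW_THRESHOLD = 0.6  # assumed confidence cutoff for flagging

def flag_for_review(image_path: str) -> bool:
    """Return True if the top prediction is confident enough that the
    image should be queued for a human analyst."""
    img = tf.keras.utils.load_img(image_path, target_size=(224, 224))
    batch = np.expand_dims(tf.keras.utils.img_to_array(img), axis=0)
    preds = model.predict(
        tf.keras.applications.mobilenet_v2.preprocess_input(batch),
        verbose=0)
    _, description, score = tf.keras.applications.mobilenet_v2.decode_predictions(
        preds, top=1)[0][0]
    print(f"{image_path}: {description} ({score:.2f})")
    return float(score) >= REVIEW_THRESHOLD

if __name__ == "__main__":
    # Hypothetical unclassified frames pulled from video footage.
    for frame in ["frame_001.jpg", "frame_002.jpg"]:
        if flag_for_review(frame):
            print(f"-> queued for human review: {frame}")
```

The design choice worth noting is that the model only prioritizes what a person looks at; the decision stays with the analyst, which is precisely the distinction Google’s statement leans on.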

This reassurance proved inadequate for some. In early April, several thousand Google employees — roughly 3% of the company’s workforce — submitted a letter to CEO Sundar Pichai asking that Google immediately withdraw its support from Project Maven and “draft, publicize, and enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology.”

After a month of corporate inaction, around a dozen Google employees resigned in protest in mid-May. This seems to have been the straw that broke the camel’s back: on June 1, the Washington Post reported that Google had decided against renewing its DoD contract when it expires in March 2019.

A Commitment to Transparency

Like Ito, we at Albert™ believe that AI should be used as a force for good. But as the Google/DoD saga makes clear, defining “the good” — whether in AI or any other context — is anything but easy.

Ultimately, trust in AI must be earned, and for us that means a commitment to transparency. That’s why we recently introduced Inside Albert, a new feature that offers marketers a window into the inner workings of our product, making it easier for users to adjust operational parameters and optimize campaign results. Innovation and ethical uncertainty are inextricably tied, and we recognize the added responsibilities incumbent upon us as pioneers of AI in the marketing space.

by Oren Langberg
Lead Sales Engineer