Microsoft Calls for New Regulations Around AI

03/16/2018

In a new ebook, Microsoft surprised many by calling on the government to increase its oversight of the application and development of artificial intelligence.

For the past two decades, tech companies have been the biggest movers and shakers in the U.S. economy, disrupting old business paradigms and creating new ones. Companies like Facebook, Amazon, and Google have ambitions to move beyond the screen and into huge, valuable markets like healthcare and finance. Like any major corporation with an unquenchable thirst for growth, these companies typically aren’t big fans of regulation.

But interestingly enough, a recently released Microsoft ebook voiced the company’s support for precisely that. Across the 149 pages of The Future Computed: Artificial Intelligence and its Role in Society, authors Brad Smith and Harry Shum, the company’s President and Executive VP of AI and Research, respectively, offer a convincing prediction of how AI will affect society in the years to come. They also make a definitive call for increased regulation of AI and its applications, a space the company clearly plans to dominate.

That raises the question: why would Microsoft ask to be wrapped up in red tape?

Changing Context

While this demand may at first seem perplexing, recent changes in how society and the government view the latest technology help contextualize Microsoft’s position. As a recent article in The Economist details, public opinion of Silicon Valley is no longer as overwhelmingly positive as it once was, and government regulators and trustbusters are turning a critical eye toward Big Data. In light of these trends, Microsoft’s call for regulation can be seen as a preemptive measure to ensure that if and when the rules are written, they’re written on Microsoft’s terms.

The authors are careful to delineate a narrow framework for policy. They insist, for example, that while regulation is needed, it shouldn’t be developed too soon, since the specific needs of the technology are still emerging. Because too much red tape too early could strangle new developments, they call for AI policy to be phased in gradually over five years, with heavy input from technology companies.

Trust in Technology

Microsoft’s demand for regulation boils down to a key factor: trust. Thanks to years of dystopian science-fiction movies like 2001: A Space Odyssey and The Matrix, there is widespread skepticism of AI among the general public. This lack of trust amounts to more than existential unease: in an economy that is already starting to depend on AI components, it also affects productivity.

At Albert™, our belief is that trust in AI is something to be earned, and that means a commitment to transparency. If marketers weren’t given visibility into how our autonomous AI-based platform works, they’d be (understandably) far less willing to trust and collaborate with it. That’s why we introduced Inside Albert, which offers marketers a view into the inner workings of Albert’s mind and makes it easier for them to tweak his parameters to optimize campaign results.

For Microsoft, the call for regulation serves exactly the same purpose. By publicly advocating on behalf of consumers, Microsoft has already started to establish itself as a trustworthy leader in the AI space. Since artificial intelligence promises to change so many aspects of modern life, that trust will go a long way in the years to come.

by Oren Langberg
Lead Sales Engineer