
05/24/2018

HBR Points to Albert as Example of Transparency in AI

A recent piece in the Harvard Business Review cites Inside Albert as a prime example of the benefits that transparency can bring to AI.

How important is trust in AI? A recent article in the Harvard Business Review argues that it’s not only essential to getting buy-in from top stakeholders but also to unlocking the technology’s full potential. Humans need transparency from their AI vendors before they can collaborate in good faith with their digital partners, and it’s only through this collaboration that artificial intelligence will reach a level of performance that matches the hype.

The author, Brad Power, argues that this level of transparency isn’t some distant fantasy; in fact, it’s already on the market. Power points to a new feature of Albert™, which we call Inside Albert, as an example of the kind of accountability AI partners need to demonstrate to earn users’ confidence: it offers insight into how the program works and makes its decisions. By opening up the black box, Inside Albert builds that crucial trust with the user, which in turn allows for more informed and efficient use of the tool.

Why Trust Takes Transparency

Today, most professionals remain wary of AI. Some resistance to new and disruptive technologies is to be expected, but when you’re trying to integrate a technology into your day-to-day operations, distrust results in lost productivity. As a McKinsey report explains, AI adoption is lagging across industries and sectors, likely because executives are reluctant to embrace a technology that could frustrate their employees.

Power argues that skepticism about AI comes down to four factors: the hype surrounding it, a lack of transparency, fear of losing control over work, and fear that it will disrupt day-to-day work processes. Even if these worries are largely unfounded, they are understandable. Managers can demand that their employees use AI, but they can’t demand that they trust it, and without that trust, the technology will never bring companies the results they’re hoping for.

How Inside Albert Earns Your Trust

The only way to cut through the fog of the unknown is to offer users transparency: detailed summaries of what the AI is doing and, eventually, why it’s doing it. Inside Albert is the first step in our march toward full AI transparency. While anyone can use Albert without a deep understanding of how he works, this capability gives users insight into what’s going on under the hood.

When Albert takes a campaign in a direction a marketer doesn’t understand, she doesn’t have to panic and rush to override his decision. Instead, she can trace his reasoning through the many decisions that led to the new direction, then decide whether or not to correct course. Or, as Power puts it, “Inside Albert let[s] marketers better understand how the system was making decisions, so they ultimately [don’t] feel the need to micromanage it.”

It’s undeniable at this point that AI will become increasingly essential to digital marketing in the coming months and years, as well as to countless other fields of work. But before that can happen, AI developers will need to put a premium on transparency. That’s not because AI is fundamentally untrustworthy, but because employees need to understand the decisions it makes before they can trust them.

by Oren Langberg
Lead Sales Engineer