People may never be able to fully understand how AI thinks, but transparency is still needed to build trust with users and optimize the productivity of these platforms.
How important is it to understand the technology we use? Practically everybody uses a computer daily, whether at work or at home, but relatively few of us know how that computer stores files or connects to the internet. Nearly all devices today are so complex that understanding their inner workings is impractical. What really matters is that we produce results in partnership with that technology.
But artificial intelligence is different. A major reason adoption of the technology lags is that it is misunderstood, surrounded by myths and misconceptions that lead many people to worry it will radically change their day-to-day work or even take their jobs.
Combating those myths will require some degree of comprehension on the part of users. Though AI may be too complex for anyone to fully grasp, even a rudimentary understanding will build trust with a skeptical public.
Can Humans Really Understand Machines?
A growing school of thinkers, including technologist David Weinberger and Facebook's chief AI scientist Yann LeCun, argues that understanding an AI's thought process is not only unnecessary but actually detrimental to the efficacy of the technology.
The main strength of AI, Weinberger argues, is that it makes decisions based on more variables than humans are capable of considering at any given time. Since the complexity, speed, and nuance of these decisions are beyond the scope of human understanding, rendering the technology understandable or explainable to the average user would require simplifying the process in ways that limit its efficacy.
Optimization Over Interpretation
But of course, this doesn't mean that AI-enabled tools should be black boxes, entirely inscrutable to their users. Though marketers may not be able to fully understand the technical details of how AI works, it's clear the opacity of AI makes them worry about control: our recent customer survey revealed that 25.5% of marketing AI users felt they had little control over their platform's activity. Instead, measures of transparency should be introduced that allow users to at least understand the AI's end goal in making a decision.
No technology is perfect, and at one point or another, any system will need to be recalibrated. By giving operators a degree of transparency into what drives their decisions, AI algorithms can facilitate active collaboration with humans, which helps build trust and dispel common myths about the technology. In our survey, 58% of agencies said they discovered new audiences with their AI, but those kinds of insights can't happen if marketers don't trust the information they're getting from their autonomous partners.
That's the motivation behind tools like Inside Albert, which we created to give users transparency into the world's first marketing platform built from the ground up on AI. Instead of limiting the scope of the platform, Inside Albert gives marketers the information they need to calibrate and recalibrate the way they use Albert. Any understanding of how the platform works, even a limited one, gives professionals the opportunity to see AI as a tool that augments their work rather than as a threat that replaces it altogether.
Before AI adoption can really surge, some degree of transparency needs to be established. The real value of AI isn't in its algorithm or its computing power; it's in the relationship between that power and the human operator who understands how to leverage it. But that relationship has to be based on trust, and that trust has to be based on transparency.