As AI becomes more commonly adopted across industries, it’s clear that the technology can make us more efficient and effective at our jobs. But can it also be a force for moral good?
It’s tempting to think of AI as an analytical tool that uses unbiased data to draw unbiased conclusions in order to make 100% unbiased decisions based on those conclusions. But this idea assumes that the data and material AI learns from aren’t affected by the same troubling prejudices that guide human decisions, and there’s mounting evidence that human bias is more than capable of working its way into AI technology.
In a recent study published in the journal Science, researchers found that AI that learns from human-written text can adopt the same stereotypes and biases that exist in humans. As Joanna Bryson, co-author of the study and a computer scientist at Princeton University, says, “Don’t think that AI is some fairy godmother. AI is just an extension of our existing culture.”
Uncovering Bias in Computers
Bryson and her colleagues developed a word-embedding association test (WEAT) to determine whether machine-learned word associations reflect the biases measured by a psychological test called the implicit association test (IAT), in which subjects’ reactions to certain words are said to reveal subconscious associations.
The WEAT was designed to detect the associations between words that a machine learning algorithm develops through analysis of human-created input. The results were not very promising. Another Science article explained that “embeddings for names like ‘Brett’ and ‘Allison’ were more similar to those for positive words like love and laughter, and those for names like ‘Alonzo’ and ‘Shaniqua’ were more similar to negative words like ‘cancer’ and ‘failure.’ For the computer, bias was baked into the words.”
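At its core, the WEAT compares cosine similarities between word vectors: a target word leans toward an attribute set if its embedding sits closer to those attribute words than to a contrasting set. The sketch below illustrates the idea with hand-built two-dimensional vectors; the toy vectors, the example names, and the `weat_effect_size` helper are illustrative assumptions, not the study’s actual data or code.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two word vectors."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def assoc(w, A, B):
    """How much closer word w sits to attribute set A than to attribute set B."""
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

def weat_effect_size(X, Y, A, B):
    """Standardized difference in association between target sets X and Y,
    relative to attribute sets A and B (WEAT-style effect size)."""
    s = [assoc(w, A, B) for w in X + Y]
    return (np.mean(s[:len(X)]) - np.mean(s[len(X):])) / np.std(s)

# Toy 2-D "embeddings", constructed so that the X names cluster near the
# pleasant words and the Y names near the unpleasant ones -- purely illustrative.
pleasant   = [np.array([1.0, 0.1]), np.array([0.9, 0.2])]  # e.g. "love", "laughter"
unpleasant = [np.array([0.1, 1.0]), np.array([0.2, 0.9])]  # e.g. "cancer", "failure"
names_x    = [np.array([0.95, 0.15])]
names_y    = [np.array([0.15, 0.95])]

print(weat_effect_size(names_x, names_y, pleasant, unpleasant))
# → ~2.0: the X names associate strongly with the pleasant set
```

With real pre-trained embeddings, the same score computed over lists of names and sentiment words is how an association like the “Brett”/“Alonzo” one above shows up as a measurable bias.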
Programming Against Our Weaknesses
That said, AI still has real potential to improve our decision-making. But we must first work to rid our AI platforms of the characteristics we deem to be flaws in our own judgement.
The solution to that challenge lies not in the data, but in the humans who program AI. Recognizing implicit biases has already enabled programmers to eliminate some biased AI decision-making. The company Textio uses artificial intelligence to create more effective job listings, and its technology has also increased diversity: companies that use the software interview an average of 23% more women.
With results like these, it’s clear that programmers can overcome implicit biases that might otherwise seep into autonomous decision-making software. AI can be tuned to work more effectively and promote diversity in key applications, provided its human programmers and administrators are shrewd enough to observe how bias occurs.
AI Works for Moral Good
Further research suggests that AI’s ability to reward positive behavior in creative ways will not only make us more efficient, but also better people. The AI scheduling company x.ai, for example, uses its technology to reward people who show up on time: when a meeting involves someone who often cancels and reschedules, the system books it at the host’s office, sparing the host wasted time and travel. We can also expect AI to improve our everyday behaviors, from autonomous cars that prevent drivers from making illegal maneuvers to systems that monitor our work-life balance and warn against burnout.
For the moment, we’re continuing to see the positive effects of AI on our businesses. While AI’s ability to make us better people remains to be seen, the ability of marketing platforms like Albert™, the world’s first completely AI-based marketing platform, to make us better at our jobs is already becoming obvious.