Artificial Intelligence Marketing Blog
May 18, 2017

by Nir Huberman
Head of Product

How Machine Learning Enforces Brand Safety


With bad placements forcing companies to withdraw ad-buys from platforms such as YouTube, market leaders are now turning to machine learning for a solution.

As internet connectivity and smartphones continue to proliferate across the globe, the volume of content accessible to the average person has increased exponentially. From a corporate perspective, this ballooning of available information represents a significant increase in the number and diversity of advertising channels available, but these opportunities are not without their attendant risks.

One need only look at the recent ad placement controversies in the UK to understand why marketers remain hesitant to commit substantial resources to ad-buys in emerging media. Over the last several months, organizations as varied as The Guardian, Johnson & Johnson, Wal-Mart, McDonald’s, Vodafone, and Audi have put holds on ad spending on YouTube — and in some cases, on all Google entities — over concerns that their ads may be placed alongside material produced by terrorist sympathizers, far-right ethno-nationalists, or any number of other peddlers of violent or lewd hate speech.

If Google fails to rehabilitate these relationships, it risks losing an estimated $750 million in revenue — a sum constituting less than 1% of its projected annual sales, but a cause for concern and a harbinger of larger issues for the brand nevertheless. The challenge Google must surmount revolves around pinpointing — and, ideally, preventing — bad ad placements in a landscape characterized by an overwhelming amount of data and content.

According to Chief Business Officer Philipp Schindler, “When [Google] spoke with many of [its] top brand advertisers, it was clear that the videos they had flagged received less than 1/1,000th of a percent of the advertisers’ total impressions.” In other words, for Google, avoiding additional placement controversies entails locating a handful of poorly-placed ads accounting for a fraction of its clients’ portfolios — the programmatic equivalent of finding a needle in a haystack.

It would be a mistake, however, to assume that a few bad placements can’t do serious damage. Thanks to the ubiquity of social media, all it takes is a single negative ad impression on the wrong consumer for the story of an “ad campaign gone wrong” to go viral. In our hyper-connected world, a PR fiasco is always but one social media influencer away, meaning brushing off even 1/1,000th of a percent of total ad impressions amounts to taking a very real risk.

The Power of Machine Learning in a Big Data World

But how should Google manage an advertising environment in which even pulling two billion bad ads, removing 100,000 publishers from the AdSense program, and prohibiting ads from being served on over 300 million YouTube videos — all of which Google did in 2016 — isn’t enough to assuage its advertisers’ brand safety concerns? The answer may rest with machine learning.

As Quantcast’s EMEA Managing Director Matt White points out, executing a comprehensive brand safety strategy involves “so much data — most of it noise — that you need pattern recognition and machine learning to pick out signals and give those signals relevancy and value.” In a world driven by big data, humans are no longer capable of processing the amount of raw information that must be considered when making advertising decisions.

Google appears to understand this. In response to ongoing advertiser frustration, Schindler explained, “We’ll be hiring significant numbers of people and developing new tools powered by our latest advancements in AI and machine learning to increase our capacity to review questionable content for advertising.” The goal is to create AI sophisticated enough to discern the nuances of what, exactly, makes a video objectionable. Google already relies on machine learning to operate features such as video recommendations on YouTube, but enabling this technology to actively monitor the 400 hours of video uploaded to the platform every minute is no small task.

That being said, Google has taken tangible steps toward an AI solution for bad ad placement, developing both its own machine learning software library, TensorFlow, and custom AI hardware, the Tensor Processing Unit (TPU). Recently, the company offered a $30,000 prize to anyone who could use the Google Cloud Platform and TPU to accurately assess, categorize, and flag problematic YouTube videos.

AI Marketing Platforms Already Protect Brand Safety

None of this technology is perfect yet, but all indicators point to machine learning as the best solution for brand safety problems in big data environments where there simply isn’t enough brain capacity or human capital to get the job done manually. In the interim, there are already excellent AI-based marketing and advertising tools capable of managing a company’s ad-buys and minimizing the risk of controversial ad placements.

With AI marketing platforms like Albert™, companies are able to specify what types and categories of media — such as news sites, gaming sites, or specific YouTube channels — they do and do not want to purchase from Google (or other ad space providers) prior to launching an ad campaign.

Not only does Albert™ allow marketing teams to target particular audiences with unprecedented precision, it also protects brand safety as strongly as currently possible by learning what kinds of channels and media your company frequently blacklists and applying these patterns to future automated ad-buys. No advertising endeavor is 100% safe, but with the machine learning capabilities of AI platforms like Albert™, your brand is in good hands.
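To make the blacklist-learning behavior described above concrete, here is a minimal illustrative sketch in Python. Albert™'s actual implementation is proprietary and not public, so the class name, methods, threshold parameter, and category labels below are all hypothetical — the sketch only models the general idea of learning which media categories a team repeatedly blacklists and screening future automated ad-buys against those patterns.

```python
# Hypothetical sketch only -- Albert's real API is proprietary and not public.
# Models the described behavior: learn which media categories a marketing team
# repeatedly blacklists, then screen future candidate placements automatically.
from collections import Counter


class PlacementFilter:
    def __init__(self, threshold=2):
        # Assumed rule: once a category has been manually blacklisted this
        # many times, the filter blocks that category automatically.
        self.threshold = threshold
        self.blacklist_counts = Counter()

    def record_blacklist(self, channel, category):
        """Log a channel the marketing team manually blacklisted."""
        self.blacklist_counts[category] += 1

    def is_safe(self, channel, category):
        """Screen a candidate placement before an automated ad-buy."""
        return self.blacklist_counts[category] < self.threshold


f = PlacementFilter()
f.record_blacklist("channel_a", "extremist")
f.record_blacklist("channel_b", "extremist")
print(f.is_safe("channel_c", "extremist"))  # False: category now auto-blocked
print(f.is_safe("channel_d", "gaming"))     # True: no blacklist pattern yet
```

A production system would of course weigh far richer signals than a simple category counter, but the principle is the same: past human decisions become training data for future automated ones.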