07/26/2018

Are Marketers Prepared to Make the Most of AI?

AI-based tools have the potential to revolutionize critical marketing tasks like content personalization, but only if marketers learn to use them properly.

While much of the early conversation about commercial applications of AI has been shaped by concerns about automation-driven job loss, the consensus is that these concerns are largely unfounded. McKinsey reports that fewer than 5% of jobs could be fully automated using current technology. Meanwhile, Gartner Research Director Manjunath Bhat declares that “robots are not here to take away our jobs, they’re here to give us a promotion.”

As countless experts have made clear, AI tools are not designed to replace human workers, but to augment them. This is especially true in fields like marketing, where creativity and complex data analysis are in constant conversation with one another.

Despite this clear alignment of interests, a recent Conductor survey found that a significant portion of marketers (34%) rank AI as the 2018 industry trend for which they feel most unprepared. Much of this uncertainty stems from marketers’ lack of clarity about what an AI tool can and cannot do, and how adopting one will affect the particulars of their day-to-day work.

AIs Need Onboarding Too!

The most important thing for a marketer to understand is that an AI tool must be trained before it can really contribute to an organization’s marketing efforts. Most AI marketing tools are powered by machine learning algorithms — programs that aren’t written to execute an explicit series of commands, but that instead “learn” patterns from the datasets they’re provided.

It’s therefore a marketer’s responsibility to feed their AI tool not just large quantities of data, but large quantities of high-quality data — especially during the early stages of integration. This can be a big ask for less data-literate marketers (over half of marketers admit to being overwhelmed by the amount of data in their marketing stack), but the best AI tools are designed to be incredibly user-friendly.

For instance, with Albert™, the world’s first fully autonomous AI marketing platform, marketers can create a straightforward rule set that shapes how the platform learns and adjusts over time. Albert takes established parameters like overall budget, daily spending limits, and frequency caps and develops a nuanced understanding of when and where to cut off or redirect spending across campaigns, maximizing results while remaining within those boundaries.
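
Every platform exposes this differently, but a minimal sketch of what such a rule set might look like in code follows. The field names and guard function here are hypothetical illustrations, not Albert’s actual interface.

```python
# Hypothetical rule set for an AI campaign tool -- the fields and the guard
# function below are illustrative, not Albert's actual API.
campaign_rules = {
    "total_budget": 50_000.00,    # overall budget for the campaign (USD)
    "daily_spend_cap": 1_500.00,  # hard ceiling on spend per day
    "frequency_cap": 3,           # max impressions per user per day
    "primary_kpi": "return_on_ad_spend",
}

def within_boundaries(spend_today: float, spend_to_date: float,
                      impressions_per_user: int, rules: dict) -> bool:
    """Return True only if a proposed action stays inside the marketer's rules."""
    return (
        spend_today <= rules["daily_spend_cap"]
        and spend_to_date <= rules["total_budget"]
        and impressions_per_user <= rules["frequency_cap"]
    )
```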

Revolutionizing Personalization in Marketing

Once a machine learning algorithm has been properly trained, personalization is arguably the first task marketers should delegate to their AI tool. Many modern consumers demand highly tailored ad experiences, but just over one in ten marketers are either “very” or “extremely” satisfied with their current level of personalization.

Traditionally, marketers have approached digital personalization in a way that reinforces existing silos within their department. If a marketer is crafting a campaign for Facebook, for example, they typically rely on lookalike audiences assembled by the social media giant’s internal teams. Tracking the quality of this lookalike audience requires a great deal of intensive cross-referencing between datasets, making it effectively impossible to accomplish in real time.

An AI tool, however, can seamlessly process immense volumes of data drawn from any number of marketplaces in a matter of minutes. This not only delivers truly 1:1 experiences for every target in a cross-channel campaign, but also helps break down long-standing silos within the marketing team.

A New Way Forward

When it comes to AI’s potential impact on marketing, these improved personalization capabilities are just the tip of the iceberg. Tools like Albert make marketers better at their craft and free them up to work with their human colleagues on higher-level, more creative projects and processes — provided that these marketers train their AI partners with the proper care.

When an organization effectively integrates AI into its operations, as one executive puts it, “what you get is a completely new working environment.” At least in marketing, this environment is not only new but demonstrably better.

by Noa Segall
Head of Search & Display Data Research

07/24/2018

What Google’s Canceled Pentagon Contract Says About AI and Morality

Google has decided against renewing its contract with the Department of Defense after employees expressed concerns about the ethical implications of militarized AI.

In a wide-ranging 2016 discussion with WIRED Editor-in-Chief Scott Dadich and President Obama, MIT Media Lab Director Joi Ito observed, “What’s important is to find the people who want to use AI for good — communities and leaders — and figure out how to help them use it.”

President Obama echoed Ito’s sentiment, but pointed out that it remains difficult to delineate between “good” and “bad” deployments of AI. “There’s no doubt that developing international norms, protocols, and verification mechanisms around cybersecurity generally, and AI in particular, [are] in [their] infancy,” he argued. “Part of what makes this an interesting problem is that the line between offense and defense is pretty blurred.”

Over the last six months, this blurred-line dilemma has been thrust into the public sphere as Google has attempted to navigate the controversy generated by its work with the US Department of Defense (DoD).

Supercharging DoD Analyses with AI

In an April 2017 memorandum, the DoD announced the establishment of the Algorithmic Warfare Cross-Functional Team (AWCFT), or Project Maven. According to the memo, “The AWCFT’s objective is to turn the enormous volume of data available to [the] DoD into actionable intelligence and insights at speed.”

Project Maven was conceived primarily as a way to integrate “advanced computer vision” into the analyses of video footage collected by military drones like the ScanEagle, MQ-1C Gray Eagle, and MQ-9 Reaper.

“A single drone…produces many terabytes of data every day,” writes Air Force Lieutenant General Jack Shanahan in an article published by the Bulletin of the Atomic Scientists in December 2017. “Before AI was incorporated into analysis of this data, it took a team of analysts working 24 hours a day to exploit only a fraction of one drone’s sensor data.”

By training Project Maven’s underlying algorithms with hundreds of thousands of human-labeled images, the DoD managed to deploy Project Maven in the military conflict against the Islamic State just eight months after the initiative was announced.

This “frankly incredible success” notwithstanding, Lt. Gen. Shanahan is well aware that the continued militarization of AI will inevitably raise a number of questions. “As US military and intelligence agencies implement modern AI technology across a much more diverse set of missions, they will face wrenching strategic, ethical, and legal challenges.”

Resistance from Within

According to a vocal minority of Google employees, however, Project Maven’s “narrow focus” is an insufficient safeguard against improper, unethical deployments of AI. Although many of the details were kept under wraps, Google quietly signed an 18-month contract with the DoD late in the summer of 2017.

After news of the tech giant’s military entanglement leaked this past March, a Google spokesperson told Gizmodo, “This specific project [Project Maven] is a pilot program with the Department of Defense to provide open source TensorFlow APIs that can assist in object recognition on unclassified data. The technology flags images for human review, and is for non-offensive uses only.”
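
In concrete terms, “object recognition with human review” describes a fairly standard computer-vision pipeline. The sketch below, which assumes an off-the-shelf TensorFlow Hub detector and an invented confidence threshold, illustrates the general idea; it is not Project Maven’s actual code, which has never been published.

```python
import tensorflow as tf
import tensorflow_hub as hub

# Publicly available detector from TensorFlow Hub (used here purely for
# illustration -- not the model deployed in Project Maven).
detector = hub.load("https://tfhub.dev/tensorflow/ssd_mobilenet_v2/2")

REVIEW_THRESHOLD = 0.5  # hypothetical confidence cutoff for human review

def flag_for_review(image_path: str) -> bool:
    """Return True if the detector finds any object above the threshold."""
    image = tf.io.decode_jpeg(tf.io.read_file(image_path), channels=3)
    batch = tf.expand_dims(image, axis=0)          # shape [1, H, W, 3], uint8
    scores = detector(batch)["detection_scores"]   # per-detection confidences
    return bool(tf.reduce_max(scores) >= REVIEW_THRESHOLD)
```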

This reassurance proved inadequate for some. In early April, several thousand Google employees — roughly 3% of the company’s workforce — submitted a letter to CEO Sundar Pichai asking that Google immediately withdraw its support from Project Maven and “draft, publicize, and enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology.”

After a month of corporate inaction, around a dozen Google employees resigned in protest in mid-May. This seems to have been the straw that broke the camel’s back: on June 1, the Washington Post reported that Google had decided not to renew its DoD contract when it expires in March 2019.

A Commitment to Transparency

Like Ito, we at Albert™ believe that AI should be used as a force for good. But as the Google/DoD saga makes clear, defining “the good” — as it pertains to AI or in any other context — is anything but easy.

Ultimately, trust in AI is something that must be earned, and for us, that means a commitment to transparency. That’s why we recently introduced Inside Albert, a new feature that offers marketers a window into the inner workings of our product, making it easier for users to adjust operational parameters and optimize campaign results. Innovation and ethical uncertainty are inextricably tied, but we recognize the added responsibilities incumbent upon us as pioneers of AI in the marketing space.

by Oren Langberg
Lead Sales Engineer

07/20/2018

UK Government Releases Framework for Development of Ethical AI

A recent report issued by the House of Lords lays the groundwork for a regulatory regime governing artificial intelligence in the UK.

On April 16, the House of Lords Select Committee on Artificial Intelligence released a comprehensive report outlining the future of AI in the UK. Entitled AI in the UK: Ready, Willing, and Able?, the report grapples with the ongoing disconnect between technological and legal changes in the AI space.

“AI is not without risks, and the adoption of the principles proposed by the Committee will help mitigate these,” says Committee Chairman Lord Tim Clement-Jones. “An ethical approach ensures the public trusts this technology and sees the benefits of using it. It will also prepare them to challenge its misuse.”

Without prescribing a specific regulatory regime, the report provides a broad ethical framework for AI development. This framework comprises five overarching principles:

AI should be developed for the common good and benefit of humanity;
AI should operate on principles of intelligibility and fairness;
AI should not be used to diminish the data rights or privacy of individuals, families, or communities;
All citizens have the right to be educated to enable them to flourish mentally, emotionally, and economically alongside AI; and
The autonomous power to hurt, destroy, or deceive human beings should never be vested in AI.

Grounded in the synthesis of extensive expert testimony, these principles are intended to both shape the future of AI in the UK and act as a corrective to a number of early AI “misuses.”

Building on GDPR

As befits any discussion of AI, the Committee dedicates a great many pages to data concepts both old (open data, data protection legislation, etc.) and new (data portability, data trusts, etc.).

For instance, despite its forthcoming “Brexit” from the European Union, the UK has committed itself to domestic enforcement of the EU’s recently implemented General Data Protection Regulation (GDPR). As the Committee points out, this means that many organizations will be required “to provide a user with their personal data in a structured, commonly used, and machine-readable form, free of charge.” GDPR stipulations like these are not uniquely applicable to AI, but moving forward, they will have a significant impact on the development of AI-driven technologies both in the UK and abroad.
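
As a simplified illustration of what “structured, commonly used, and machine-readable” can mean in practice, a data-portability export could be as basic as serializing a user’s records to JSON; the fields below are invented for the example.

```python
import json

# Hypothetical response to a data-portability request: export one user's
# records in a structured, machine-readable format.
user_record = {
    "user_id": "12345",
    "email": "user@example.com",
    "ad_interactions": [
        {"campaign": "spring_sale", "channel": "search", "clicks": 3},
        {"campaign": "retargeting_q2", "channel": "display", "clicks": 1},
    ],
}

with open("user_12345_export.json", "w") as f:
    json.dump(user_record, f, indent=2)
```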

Counteracting Hardcoded Bias

On a more AI-specific level, the Committee writes at length about one of AI’s most persistent ethical dilemmas: bias. For example, the UK is home to a number of companies offering machine learning-driven recruitment tech, which uses algorithms to narrow down a candidate pool by leveraging historical data to draw connections among previously successful candidates.

As efficient as such an approach can be, it isn’t without its pitfalls. “While the intention may be to ascertain those who will be capable of doing the job well and fit within the company culture, past interviewers may have consciously or unconsciously weeded out candidates based on protected characteristics (such as age, sexual orientation, gender, or ethnicity)…in a way which would be deeply unacceptable today,” the report reads.

In other words, if bias — or even flat-out (illegal) discrimination — is hardcoded into a company’s data, it will only be amplified by the introduction of AI. This can be a difficult obstacle to overcome, but the Committee is adamant that doing so is necessary to ensure that AI “operates on the principle of fairness.”
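
A toy model makes the amplification effect concrete: if past hiring decisions correlate with a protected attribute, an algorithm trained on that history will reuse the attribute (or proxies for it) as a predictive signal. The data below are fabricated purely to show the mechanism.

```python
from sklearn.linear_model import LogisticRegression

# Fabricated historical hiring data: [years_experience, protected_attribute],
# where past interviewers systematically favored protected_attribute == 1.
X = [[2, 1], [3, 1], [4, 1], [5, 1], [2, 0], [3, 0], [4, 0], [5, 0]]
y = [1, 1, 1, 1, 0, 0, 0, 1]  # "hired" labels reflecting the historical bias

model = LogisticRegression().fit(X, y)

# Two equally experienced candidates who differ only in the protected
# attribute receive very different scores -- the model learned the bias.
print(model.predict_proba([[4, 1]])[0][1])  # candidate from the favored group
print(model.predict_proba([[4, 0]])[0][1])  # candidate from the disfavored group
```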

Placing a Premium on Transparency

While the testimony provided to the Committee was wide-ranging and, at times, contradictory, nearly every expert “highlighted the importance of making AI understandable to developers, users, and regulators.” According to the Committee, this entails a high degree of “explainability” — that is, “AI systems [that] are developed in such a way that they can explain the information and logic used to arrive at their decisions.”

At Albert™, the world’s first fully autonomous AI marketing platform, explainability — or, as we call it, transparency — has been at the heart of what we do since day one. Using our Inside Albert feature, marketers can take a peek “behind the scenes” of each and every campaign Albert is managing.

There’s still much to be done to prepare society at large for an AI-driven future, but with a tool like Albert, marketers have the chance to prove that, at least in certain industries, the future is already here.

by Noa Segall
Head of Search & Display Data Research

07/13/2018

4 Steps to Transform Your Agency

As digital media finally overtakes traditional media in terms of revenue, ad agencies need to reconsider how they approach nearly every aspect of their work.

According to Ad Age’s Agency Report 2018, 2017 marked the first year in which digital work accounted for more than half of American ad agencies’ revenues. Digital tasks comprised 51.3% of the work agencies conducted last year — nearly double the share they claimed in 2009.

Digital media’s relatively rapid rise has had countless, dramatic repercussions in the professional world, but perhaps none more disruptive than the democratization of agency work. A recent Research Intelligencer study found that the ad agency holding companies WPP, Omnicom, Publicis, Interpublic, and Dentsu now command only half of all global advertising and marketing revenue, a far cry from the near-total dominance of the industry that first earned them the moniker “the Big 5.”

The Big 5’s collective revenue growth rate has decreased from 4% in 2015 to 3% in 2016 to a mere 0.9% in 2017. And while there are numerous factors driving this stagnation, smaller agencies’ ability to navigate the unpredictable, ever-changing digital landscape is foremost among them.

Of course, smaller doesn’t always mean better — even in the digital age — and non-Big 5 agencies must also make a concerted effort to constantly evolve their businesses to keep up with the times. With that in mind, I’ve drawn up a list of four steps any agency can take to remain competitive in today’s digital-first advertising and marketing landscape.

1. Get Creative with Your KPIs

In many ways, modern digital marketing is remarkably different from traditional print, radio, and out-of-home marketing. As an agency, adapting to this sea change requires a collaborative approach aimed at rethinking long-held industry maxims, including those that linger from the early days of digital marketing. A new marketing world demands new marketing ideas, and agencies shouldn’t hesitate to consider and adopt radical new business propositions.

For instance, click-through rates on banner ads or vague measures of social media engagement (think: “Likes” or “Shares”) might not be the ideal KPIs for measuring a client’s goals, and an agency shouldn’t hesitate to drop such standard metrics in favor of more performance-oriented KPIs like return on ad spend.
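
For reference, return on ad spend is simply attributed revenue divided by ad spend. A quick sketch, using invented numbers, shows how a campaign can look healthy by an engagement metric while losing money by a performance metric:

```python
def click_through_rate(clicks: int, impressions: int) -> float:
    """Engagement metric: clicks per impression."""
    return clicks / impressions

def return_on_ad_spend(attributed_revenue: float, ad_spend: float) -> float:
    """Performance metric: revenue generated per dollar of ad spend."""
    return attributed_revenue / ad_spend

# Invented example: a banner campaign with a respectable CTR can still lose money.
print(click_through_rate(clicks=1_200, impressions=150_000))            # 0.008 (0.8%)
print(return_on_ad_spend(attributed_revenue=9_000, ad_spend=12_000))    # 0.75x
```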

2. Reimagine Your Client Relationships

To stand out from competitors, agencies must change not only the way they measure success, but also the very nature of their client relationships. The “client serves brief, agency delivers solution” model isn’t optimized for the digital age, and agencies need to start thinking more in terms of dynamic partnerships and less in terms of assembly line-style production and hourly rates.

This imperative is particularly pressing when it comes to transparency. The Association of National Advertisers’ 2016 Media Transparency Report found that “numerous non-transparent business practices, including cash rebates to media agencies, [are] pervasive in the US media buying ecosystem.” An agency that spends its clients’ ad budgets according to such incentive structures rather than its clients’ best interests undermines the entire purpose of an agency partnership. In other words, this approach is the epitome of what not to do to get ahead.

As stated earlier, a dynamic partnership requires your agency to focus as much on KPIs and creative output as it does on project budgets and timelines.

3. Invest in Sophisticated Data Analytics

Data is the “raw material” with which digital marketing campaigns are built, and a company’s agency partner is often the only party with access to all of the data necessary to accurately assess both specific ad performance and bigger-picture campaign effectiveness. As such, sophisticated data analytics capabilities have become the mark of an effective agency.

True campaign optimization is only possible once an agency is able to surface and act on predictive insights drawn from real-time, dynamic analytics. However, executing such an analytics program at scale, across numerous client accounts, is all but humanly impossible, which leads us to…

4. Adopt an AI Marketing Tool

A cutting-edge AI tool like Albert™ can help agencies lower their operational expenses and dedicate more time to delivering real value to their clients.

Integrating an AI component into an agency’s operations is not only the first step toward a reliable predictive analytics program; it also helps the agency become more transparent and more tactically creative.

Stepping into the AI marketing era is a big step indeed, but it’s a necessary one for any group angling to assert itself as a truly digital-first agency.

by Oren Langberg
Lead Sales Engineer

07/06/2018

Waymo Blazing the Trail Towards Driverless Dominance

By focusing more on its AI-powered self-driving system than on building its own driverless car, Waymo has established itself as a leader in the autonomous driving market.

Though a few troubling accidents have hindered the development of self-driving technology at both Uber and Tesla, not all autonomous automobile innovators are struggling with safety. Major industry player Waymo (known as the Google Self-Driving Car Project until late 2016) has not only managed to steer clear of serious accidents, but has also taken major steps towards industry dominance.

Focusing on the “Driver”

Unlike many of its competitors, Waymo has little desire to manufacture its own self-driving vehicles. Instead, the company is pouring all of its resources into perfecting an autonomous “driver” — a software-hardware package that has been in development since 2016 — that can be deployed in any properly outfitted vehicle.

Whereas initiatives like General Motors’ Cruise division have been focused on manufacturing fleets of self-driving vehicles from the ground up, Waymo has chosen to invest in both the practical and the posh. It has been testing a fleet of self-driving Chrysler Pacifica minivans in Arizona since early 2017, and it recently announced a partnership with Jaguar aimed at bringing as many as 20,000 self-driving I-PACE luxury electric vehicles to the road in the near future.

“While we’ve been focused at Waymo on building the world’s most experienced driver, the team at Jaguar Land Rover has developed an all-new battery-electric platform that looks to set a new standard in safety, design, and capability,” says Waymo CEO John Krafcik.

Impressive Progress Being Made

Since its system is manufacturer-agnostic, Waymo has the flexibility to jump on opportunities like the Jaguar partnership and test its Driver in more places. This approach partly explains why its driverless road tests have been so successful. As Mountain View City Manager Dan Rich points out, “Waymo has done extensive vehicle testing on our local streets with a good safety record.”

According to a report Waymo produced for the California DMV, vehicles equipped with the company’s autonomous driving system drove 352,545 miles on California roads from December 2016 to November 2017. During these road tests, Waymo recorded only 63 “disengagements” — defined by the California DMV as “a deactivation of the autonomous mode when a failure of the autonomous technology is detected” — amounting to one incident every 5,596 miles driven.
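
The figure above is simply autonomous miles divided by disengagements, as a quick check confirms:

```python
# Quick check of the figure above: autonomous miles per disengagement.
autonomous_miles = 352_545
disengagements = 63
print(autonomous_miles / disengagements)  # ~5,596 miles per disengagement
```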

The Power of High-Volume Virtual Testing in Machine Learning

Real-life road tests are obviously critical, but as Waymo Lead Software Engineer James Stout explains, the vast majority of Waymo tests are conducted in a virtual environment.

“Each day, as many as 25,000 virtual Waymo self-driving cars drive up to 8 million miles in simulation, testing out new skills and refining old ones,” Stout writes. “With simulation, we can turn a single real-world encounter — such as a flashing yellow left turn [light] — into thousands of opportunities to practice and master a skill.”

All told, Stout estimates that Waymo “drivers” navigated over 2.5 billion virtual miles in 2016 alone, “miles far richer and more densely packed with interesting scenarios than the average mile of [road] driving.”

Both the company’s “drivers” — which are designed around sophisticated machine learning algorithms — and its engineers learn new things each time a test is conducted, making high-volume virtual testing a crucial element of Waymo’s pursuit of driverless perfection.

A Proven Artificial Intelligence System

Ultimately, the underlying mechanics of Waymo’s “driver” improvement process are not unlike those at work in Albert™, the world’s first fully autonomous artificial intelligence marketing platform.

As soon as Albert is fed a company’s historical marketing data and its creative materials for a new campaign, he conducts a wide variety of microtests based on thousands of variables. Like Waymo’s virtual simulations, these microtests help Albert figure out what works, what doesn’t, and what changes need to be made to the company’s marketing practices.
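
Albert’s internals aren’t public, so the following is only a generic illustration of the microtesting idea: an epsilon-greedy loop that keeps testing many creative-and-audience combinations and shifts traffic toward the best performers.

```python
import random

# Generic epsilon-greedy microtesting loop -- an illustration of the concept,
# not Albert's actual algorithm. Each "variant" is a creative/audience combo.
variants = {"headline_A|lookalike": [], "headline_B|lookalike": [],
            "headline_A|retargeting": [], "headline_B|retargeting": []}
EPSILON = 0.1  # fraction of traffic reserved for exploration

def choose_variant() -> str:
    """Pick a variant: occasionally explore, otherwise exploit the best so far."""
    tested = {v: results for v, results in variants.items() if results}
    if random.random() < EPSILON or not tested:
        return random.choice(list(variants))  # explore an arbitrary variant
    return max(tested, key=lambda v: sum(tested[v]) / len(tested[v]))  # exploit

def record_result(variant: str, conversion: bool) -> None:
    """Log the outcome of one impression so future choices can learn from it."""
    variants[variant].append(1.0 if conversion else 0.0)
```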

With Albert’s help, a company can execute a marketing campaign across multiple channels and audience segments at superhuman speeds, delivering a higher, more consistent return on ad spend than a team of human marketers could achieve alone.

To learn more about how AI is changing the future across industries, download our latest eBook, The Top Movers & Shakers In Autonomous AI Today.

by Noa Segall
Head of Search & Display Data Research