UK Government Releases Framework for Development of Ethical AI

07/20/2018

A recent report issued by the House of Lords lays the groundwork for a regulatory regime governing artificial intelligence in the UK.

On April 16, the House of Lords Select Committee on Artificial Intelligence released a comprehensive report outlining the future of AI in the UK. Entitled AI in the UK: Ready, Willing and Able?, the report grapples with the widening gap between the pace of technological change and the pace of legal change in the AI space.

“AI is not without risks, and the adoption of the principles proposed by the Committee will help mitigate these,” says Committee Chairman Lord Tim Clement-Jones. “An ethical approach ensures the public trusts this technology and sees the benefits of using it. It will also prepare them to challenge its misuse.”

Without prescribing a specific regulatory regime, the report provides a broad ethical framework for AI development, built on five overarching principles:

AI should be developed for the common good and benefit of humanity;
AI should operate on principles of intelligibility and fairness;
AI should not be used to diminish the data rights or privacy of individuals, families, or communities;
All citizens have the right to be educated to enable them to flourish mentally, emotionally, and economically alongside AI; and
The autonomous power to hurt, destroy, or deceive human beings should never be vested in AI.

Drawn from a synthesis of extensive expert testimony, these principles are intended both to shape the future of AI in the UK and to act as a corrective to a number of early AI “misuses.”

Building on GDPR

As in any serious discussion of AI, data looms large: the Committee dedicates a great many pages to data concepts both old (open data, data protection legislation) and new (data portability, data trusts).

For instance, despite its forthcoming “Brexit” from the European Union, the UK has committed itself to domestic enforcement of the EU’s recently implemented General Data Protection Regulation (GDPR). As the Committee points out, this means that many organizations will be required “to provide a user with their personal data in a structured, commonly used, and machine-readable form, free of charge.” GDPR stipulations like these are not uniquely applicable to AI, but moving forward, they will have a significant impact on the development of AI-driven technologies both in the UK and abroad.

Counteracting Hardcoded Bias

On a more AI-specific level, the Committee writes at length about one of AI’s most persistent ethical dilemmas: bias. The UK, for example, is home to a number of companies offering machine-learning-driven recruitment technology, which narrows a candidate pool by using historical hiring data to identify patterns among previously successful candidates.

As efficient as such an approach can be, it isn’t without its pitfalls. “While the intention may be to ascertain those who will be capable of doing the job well and fit within the company culture, past interviewers may have consciously or unconsciously weeded out candidates based on protected characteristics (such as age, sexual orientation, gender, or ethnicity)…in a way which would be deeply unacceptable today,” the report reads.

In other words, if bias, or even outright (illegal) discrimination, is hardcoded into a company’s historical data, introducing AI will replicate and potentially amplify it. This can be a difficult obstacle to overcome, but the Committee is adamant that doing so is necessary to ensure that AI “operates on the principle of fairness.”
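To make that dynamic concrete, here is a deliberately toy, hypothetical sketch, not drawn from the report and using invented names and data, of how a “model” fitted to biased historical hiring decisions simply reproduces that bias:

```python
# Hypothetical illustration: a crude "recruitment model" trained on biased
# historical decisions reproduces that bias. All data here is invented.

from collections import Counter

# Historical outcomes: (years_experience, group, hired)
# Suppose past interviewers systematically favoured group "A".
history = [
    (5, "A", True), (3, "A", True), (2, "A", True),
    (5, "B", False), (6, "B", False), (4, "B", True),
]

def train(records):
    """Learn the historical hire rate per group (experience is ignored on purpose)."""
    counts, hires = Counter(), Counter()
    for _, group, hired in records:
        counts[group] += 1
        hires[group] += hired
    return {g: hires[g] / counts[g] for g in counts}

def predict(model, group):
    """Shortlist candidates whose group historically had a >50% hire rate."""
    return model[group] > 0.5

model = train(history)
print(predict(model, "A"))  # True  -- group "A" candidates get shortlisted
print(predict(model, "B"))  # False -- the historical bias is replicated
```

The point is the pattern rather than the code: a real model would rarely see a protected characteristic this directly, but it can still absorb the same bias through correlated features; it is shown explicitly here only to keep the sketch short.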

Placing a Premium on Transparency

While the testimony provided to the Committee was wide-ranging and, at times, contradictory, nearly every expert “highlighted the importance of making AI understandable to developers, users, and regulators.” According to the Committee, this entails a high degree of “explainability” — that is, “AI systems [that] are developed in such a way that they can explain the information and logic used to arrive at their decisions.”
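As a loose illustration of what “explainability” can mean in practice, here is a minimal, hypothetical sketch (the feature names and weights are invented) of a scoring function that returns not just a decision but the per-feature contributions behind it:

```python
# Hypothetical sketch of explainability: a decision accompanied by the
# per-feature contributions that produced it. Weights are invented.

WEIGHTS = {"relevant_experience": 2.0, "referral": 1.0, "typos_in_cv": -1.5}

def score_with_explanation(candidate):
    """Return the decision together with the logic used to reach it."""
    contributions = {f: WEIGHTS[f] * candidate.get(f, 0) for f in WEIGHTS}
    total = sum(contributions.values())
    return {"decision": total > 0, "score": total, "contributions": contributions}

print(score_with_explanation({"relevant_experience": 1, "typos_in_cv": 1}))
# {'decision': True, 'score': 0.5, 'contributions': {...}}
```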

At Albert™, the world’s first fully autonomous AI marketing platform, explainability (or, as we call it, transparency) has been at the heart of what we do since day one. Using our Inside Albert feature, marketers can take a peek “behind the scenes” of each and every campaign Albert is managing.

There’s still much to be done to prepare society at large for an AI-driven future, but with a tool like Albert, marketers have the chance to prove that, at least in certain industries, the future is already here.

by Noa Segall
Head of Search & Display Data Research