Those EU Guidelines for Ethical AI? Here’s What Stood Out

The initial round of guidance from Brussels hinges on the notion of Trustworthy AI that has been developed with “ethical purpose” in mind. Here's RedTail's take.

What You’ll Learn in this Article:

  • Draft guidelines for AI Ethics from the European Commission focus on the concept of Trustworthiness and feature many principles seen in other AI Ethics guidance from throughout Europe.
  • The draft features some less-commonplace concepts, including Design for all, Reproducibility and Auditability.
  • The 52-member group that devised the guidance is composed almost entirely of white people, despite the draft’s emphasis on diversity in AI development teams.
  • The EU Commission is accepting comments on the draft until January 18.

The European Commission unveiled draft AI Ethics guidelines on December 18. While the working document on Ethics Guidelines for Trustworthy AI is certainly not the only government paper or set of proposed ethical AI guidelines to come out of the region, the release garnered minimal news coverage in English-language media, both inside and outside the US.

Considering the power of the commission and the size of the “expert group” that devised the draft guidance, the apparent lack of interest in a set of principles that could influence AI ethics standards across the globe is surprising, if not disillusioning, given that it comes from the same body that gave us the globally scrutinized GDPR. The draft principles, which are not intended as official policy, are open to comments until January 18 and will be formally endorsed in March.


This initial round of guidance from Brussels hinges on the notion of Trustworthy AI. Trustworthy AI is defined therein as AI that has been developed with “ethical purpose” in mind, meaning the development, deployment and use of AI respects fundamental rights and applicable regulation. Trustworthiness, according to the EU group, also entails trust in the robustness and reliability of the technology itself.

“AI will continue to impact society and citizens in ways that we cannot yet imagine…only when the technology is trustworthy will human beings be able to confidently and fully reap its benefits.”

In October, Politico interviewed Pekka Ala-Pietilä, former Nokia president and member of the EU’s “High Level Expert Group on Artificial Intelligence,” the assembly of 52 people chosen to craft the guidelines. At the time, he indicated that the group would not push for AI Ethics regulation. “We foresee that it might be actually reasonable to think of the regulation as a common law-based phenomenon, where you regulate ‘ex-post,’ you also give room for self-regulation and you … understand when to regulate and when not to regulate,” he told the publication.

An Ethics-by-Design Approach

In all, the guidelines put forth ten core rights, principles and values necessary for Trustworthy AI:

  • Accountability
  • Data Governance
  • Design for all
  • Governance of AI Autonomy (Human oversight)
  • Non-Discrimination
  • Respect for (and Enhancement of) Human Autonomy
  • Respect for Privacy
  • Robustness
  • Safety
  • Transparency

Unlike other primarily theoretical AI Ethics documents, the EU draft features a section dedicated to practical means of implementing methods for incorporating ethical values throughout the AI development process. In other words, it suggests ways to turn ethics-by-design into standard practice.

The draft guidelines suggest ways to turn ethics-by-design into standard practice.

Here, the draft suggests questions that AI technologists and designers might pose throughout the development process, categorized in relation to the ten core principles outlined. Possible questions include “Is an (external) auditing of the AI system foreseen?” “Is there a clear basis for trade-offs between conflicting forms of discrimination, if relevant?” and “Does the AI system indicate to users that a decision, content, advice, or outcome, is the result of an algorithmic decision of any kind?”

What Stands Out: Reproducible AI, Design for All and a Good Nudge

Many of the concepts put forth by the EU group, such as accountability, respect for human autonomy, data governance and privacy, transparency and non-discrimination, have been core to other sets of AI ethics principles (see AccessNow’s excellent overview of country-specific AI Ethics guidance from Europe). But some of the EU group’s suggestions are relatively unique. Here’s what stood out to RedTail:

Design for All
Design for all is defined as just that — systems that are “designed in a way that allows all citizens to use the products or services, regardless of their age, disability status or social status.” The principle alludes to the United Nations Convention on the Rights of Persons with Disabilities, and “implies the accessibility and usability of technologies by anyone at any place and at any time, ensuring their inclusion in any living context thus enabling equitable access and active participation of potentially all people in existing and emerging computer-mediated human activities.”

Of course, many AI systems are designed with very specific purposes and user groups in mind. Consider an AI platform that helps medical doctors make decisions about patient care or a machine learning system used to determine the safety risks on a construction site. It is unclear how this principle would apply in such circumstances.

The EU draft suggests that Trustworthy AI should enable Design for all, or “equitable access and active participation of potentially all people in existing and emerging computer-mediated human activities.”

Reproducibility
Another concept not commonly found in other proposed AI Ethics guidance is “Reproducibility.” The EU guidance suggests that the ability to reproduce AI decisions is “a critical requirement in the field,” but notes that “complexity, non-determinism and opacity of many AI systems, together with sensitivity to training/model building conditions, can make it difficult to reproduce results.”

Indeed, similar caveats have been noted in relation to the more common demands for AI that is transparent and explainable. While AI ethicists often call for algorithmic transparency in order to enable systems that can explain why they made decisions, some technologies are so complex that they limit this capability. Proprietary concerns around intellectual property rights are also major hurdles to transparency.
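The draft doesn’t spell out why reproducibility is so sensitive to “training/model building conditions,” but the core issue is easy to illustrate: stochastic training only reproduces when every source of randomness is pinned down. Here is a minimal, hypothetical Python sketch (not from the EU document) in which a toy “training” routine uses random search, so two runs agree only when they share a seed:

```python
import random

def train_toy_model(data, seed=None):
    """Toy 'training': random-restart search for a threshold that best
    separates the labeled points. Stands in for the non-determinism of
    real model training (random init, shuffling, dropout, etc.)."""
    rng = random.Random(seed)  # unseeded -> different results each run
    best_threshold, best_correct = None, -1
    for _ in range(100):
        t = rng.uniform(0, 10)
        correct = sum((x > t) == label for x, label in data)
        if correct > best_correct:
            best_threshold, best_correct = t, correct
    return round(best_threshold, 6)

data = [(1, False), (2, False), (7, True), (9, True)]

# Seeded runs on identical data always match; unseeded runs may not.
assert train_toy_model(data, seed=42) == train_toy_model(data, seed=42)
```

In real systems the same principle applies, but the sources of randomness multiply (hardware, parallelism, library versions), which is why the draft treats reproducibility as a requirement that is hard to meet rather than a given.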

Traceability and Auditability
Other concepts mentioned by the EU but seen less often in other AI Ethics guidance include Traceability and Auditability. The idea here is that AI developers should be able to explain the method used to test a learning-based algorithmic system. The guidelines go on to specifically suggest that “The outcome(s) of or decision(s) taken by the algorithm should be provided, as well as potential other decisions that would result from different cases (e.g. for other subgroups).”

Again, in the case of more complex machine learning systems and those built with neural networks, it may be challenging to determine potential outcomes or decisions a system might make in the future.
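The draft doesn’t prescribe an implementation for this kind of traceability, but the idea of recording each decision alongside “potential other decisions that would result from different cases” can be sketched as a simple audit log. In this hypothetical Python example, the scoring rule, threshold and field names are all invented for illustration:

```python
def decide(applicant, threshold=600):
    # Hypothetical scoring rule standing in for a real system's model.
    return "approve" if applicant["score"] >= threshold else "deny"

def decide_with_audit(applicant, audit_log):
    """Return a decision and append an audit record that preserves the
    input, the outcome, and counterfactual outcomes for variant cases."""
    outcome = decide(applicant)
    audit_log.append({
        "input": dict(applicant),
        "outcome": outcome,
        # Outcomes "that would result from different cases," per the
        # draft: here, the same applicant with nearby score variants.
        "counterfactuals": {
            score: decide({**applicant, "score": score})
            for score in (550, 650)
        },
    })
    return outcome

log = []
outcome = decide_with_audit({"score": 620, "group": "A"}, log)
# log[0] now preserves the input, the decision, and what variant
# inputs would have received, ready for an external auditor.
```

A real auditable system would also need tamper-evident storage and records of the model version and training data, but even this skeleton shows the shape of what the guidelines are asking for.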

Ethical Nudging
The guidance briefly mentions “nudging,” referencing the alerts, notifications or suggestions delivered by mobile apps, recommendation engines, virtual coaches or personal assistants. Think of the fitness app that prods a user to walk 300 more steps that day. In the broader context of favoring human-centric AI, the draft notes that “AI products and services, possibly through ‘extreme’ personalisation approaches, may steer individual choice by potentially manipulative ‘nudging,’” and suggests that systems must ensure “that the overall wellbeing of the user as explicitly defined by the user her/himself is central to system functionality.”

“AI products and services, possibly through ‘extreme’ personalisation approaches, may steer individual choice by potentially manipulative ‘nudging,’ ” the draft states.

An Ironic Call for Diversity

It’s worth noting the EU draft guidance also raises the issue of diversity and inclusion on AI design teams. “It is not only necessary that teams are diverse in terms of gender, culture, age, but also in terms of professional backgrounds and skillsets,” states the draft. While this is not a novel concept and has been mentioned elsewhere as an important element of ethical AI, it is particularly notable in this case because the group of experts chosen to craft the EU guidelines, while diverse in gender, is remarkably white. A scroll through the faces of its 52 members indicates a dearth of dark skin.

The EU government itself has been criticized for being “so white.”

The EU government itself has been criticized for being “so white.” And while there may not be many AI experts of color based in Europe, RedTail easily found some who are involved in the growing organization Black in AI.

The final day to provide feedback on the draft AI Ethics guidelines is January 18 – just two weeks from now. The EU’s expert group aims to deliver its related Policy and Investment Recommendations in May 2019.

About Kate Kaye

Kate Kaye is an award-winning journalist with nearly twenty years of professional reporting experience chronicling the evolution of digital media and technology. One of the first reporters to track how political organizations use digital advertising, Kate is the author of "Campaign '08: A Turning Point for Digital Media," a 2009 book (http://www.amazon.com/gp/product/1441488464/ref=cm_cr_thx_view) covering the digital media efforts of the 2008 presidential campaigns. Kate has appeared on NPR’s On the Media, Weekend Edition Sunday and the Brian Lehrer Show, in addition to Fox’s Stossel Show and CBC Radio.


Copyright 2019 Pandion Media LLC.
All Rights Reserved.