Big Tech’s AI Ethics Nonprofit Signals Business-Aligned Motives and Lobbying Mission

It’s been nearly three years since Amazon, Facebook, Google, IBM and Microsoft formed Partnership on AI. The results of its long-awaited work are as surprising as they are revealing.

What you’ll learn in this article:

  • Case studies just released by PAI, intended to address the labor impact of AI, take a management perspective, focusing on productivity and investment.
  • The case studies feature three companies with direct ties to PAI’s core founding member corporations.
  • The group’s only report, just published, disregards the pressing societal impacts enabled by AI-based technologies developed by the organization’s core founders.
  • Despite its declaration that PAI will not engage in lobbying, the group has recently hired people with US government and government-linked backgrounds and announced a mission to engage with policymakers.

It was nearly three years ago when Amazon, Facebook, Google, IBM and Microsoft formed nonprofit group Partnership on AI with a mission to research and recommend best practices for ethical AI. Over the past few days, PAI finally has pulled back the curtain and revealed the work some of its more than 80 members, which also include organizations such as the ACLU and Amnesty International, have been doing in that time.

The results are as surprising as they are revealing.

Today, PAI introduced a collection of case studies intended to highlight the impact of AI on labor and the economy – an important set of issues in the world of AI ethics. However, the management-focused case studies, a product of PAI’s AI, Labor and the Economy working group, involve companies with a variety of business ties to PAI’s core founders as well as to a key member of the working group.

The case studies come on the heels of a report published by PAI which itself deserves scrutiny for its disregard of the more pressing societal impacts enabled by AI-based technologies developed by the organization’s core founders. Meanwhile, a look at the backgrounds of the people hired recently by PAI seems incongruous with its original promise not to lobby government or policymakers.

AI and The Future of…Management?

Particularly in the nonprofit realm, typical discussions around the future of work, the label given to a collection of issues associated with the impact of AI on workers, jobs and the economy, focus more on labor impact than they do on business concerns. PAI founding member Amazon, for instance, has drawn scrutiny recently for employing an automated system in its fulfillment centers to make decisions that could lead to employee termination, according to a recent Verge report.

Yet, rather than focusing on use of surveillance AI in the workplace or concerns surrounding exploited overseas labor employed to annotate AI training data, the case studies published by PAI today highlight topics like productivity gains and AI investment, subjects that would be at home on a corporate website or in a tech business seminar.

In fact, as noted in a statement sent to RedTail by PAI, the case studies tout the “ROI of AI,” business-speak for return on investment. In other words, while the organization addresses some implications of AI for workers, the key takeaway is aimed at management: invest in AI for your business (and it just so happens that PAI’s founding members offer a variety of business-aimed AI tools).

“On the case studies, they do indeed take a management-driven perspective which I believe the case studies openly acknowledge,” said Peter Lo, PAI’s senior communications manager, in a statement sent to RedTail. He added, “We are also looking to do precisely more case study work from the worker’s perspective as these initial set of case studies are limited in their scope.”

Case Studies and Corporate Ties

Meanwhile, the corporations featured in the case studies also raise eyebrows. Despite PAI’s 501(c)(3) status, two of the firms featured in the case studies are curiously connected to corporate partners and clients of core PAI founders. Tata Steel, owned by multinational Tata Group, and Axis Bank, both featured and both headquartered in Mumbai, India, are currently or have been corporate partners or clients of PAI’s core founding companies.

Just recently, Amazon partnered with Axis Bank to offer peer-to-peer payment services, for example. The company has also worked with another Tata subsidiary, Tata Motors. And IBM has called Tata Group’s Tata Consultancy a client.

Tata Steel and Axis Bank have also both worked with McKinsey and Company, the massive corporate consultancy that co-wrote the case studies. Michael Chui, co-chair of the PAI labor working group, is also a partner at the McKinsey Global Institute.

Then there’s the third case study, featuring California biomedical research firm Zymergen. The case study methodology itself acknowledges that McKinsey is among the firms that have invested $574M in Zymergen. Another interesting wrinkle: the research firm’s senior director of IP strategy and commercial litigation, John LaBarre, previously served as a senior member of Google’s patent team.

“[T]he subjects of these case studies were not chosen randomly and were sourced through existing relationships within PAI’s AILE Working Group,” notes the case studies documentation.

Five Tech Giants, a Report, and an Elephant in the Room

The case studies are part of a collection of content just published by PAI, the most the group has put out in the more than 2.5 years since it formed with a splashy announcement covered in media around the world. Since then, the tech giants that founded PAI have been criticized or investigated for enabling election disinformation campaigns, creating discriminatory facial recognition systems, building problematic content algorithms, using consumer data indiscriminately and even for taking questionable approaches to AI ethics advisory.

As part of this press-friendly onslaught, PAI unveiled a research report last week. Observers may have expected PAI to take cues from other AI ethics groups or trade associations by revealing guidelines or practical advice its members such as Amazon and IBM can employ to ensure non-discriminatory facial recognition systems. Or, perhaps, they may have anticipated PAI to develop practical ways algorithmic media puppet-masters Google and Facebook could work together to combat the scourge of fake news, deep fake videos and dissemination of damaging white-nationalist rhetoric. None of that happened.

Instead, among PAI’s first substantial work is a report recommending that policymakers avoid using criminal recidivism risk assessment tools in decisions to incarcerate. Risk assessment tools have drawn great scrutiny from criminal justice watchdogs for unfairly assigning high recidivism-risk scores to Black defendants. But the choice to focus the organization’s first major project on technology that has little connection to the sometimes-maligned technologies that actually create revenue streams for PAI’s core founders seems convenient, and raises questions.

When asked by RedTail why, if PAI is interested in AI employed for law enforcement, the group hasn’t addressed facial recognition systems, which at least two of its founding members, Amazon and IBM, offer, PAI’s spokesperson dodged the question:

“This paper is narrowly scoped to the use of risk assessment tools in the criminal justice system. The report’s requirements reflect the fact that detention decisions are very serious matters and it is governments that are using these tools. We have other projects in process to address best practices in other domains that are more relevant to large technology companies. These include a new project which we are just beginning, such as ABOUT ML, a project to begin defining best practices in machine learning.”

About that Lobbying Thing…

Lisa Dyer, director of policy for “no-lobbying” nonprofit PAI

When PAI launched in September 2016, it made a point of declaring that the group “does not intend to lobby government or other policymaking bodies.” So it’s worth noting one of the few activities PAI has made public: hiring executives with backgrounds one might not expect from an AI ethics nonprofit that intends not to lobby. In March, PAI brought on Lisa Dyer as director of policy.

“In this role, Lisa will lead PAI’s policy activities to collaborate closely with policymakers and Partners. As a leader with extensive experience in technology policy formulation, and in translating complex concepts for diverse audiences, Lisa will represent PAI’s work to a global audience of stakeholders and the policymaking community.”

In early April, RedTail asked to speak to Dyer about her new role, but did not get a response from PAI.

Dyer previously served in the U.S. State Department as director of the Office of Intellectual Property Enforcement in the Bureau of Economic and Business Affairs, where she worked to ensure intellectual property protections for US companies in foreign markets. It’s no secret that IP is a major concern of firms operating in AI and the broader tech space, both within the US and in dealings with other countries.

Dyer is not the only new hire at PAI who could help in the lobbying department. Samir Goswami, PAI’s chief operating officer, joined in November. According to a PAI press release, Goswami “managed a portfolio of U.S. Government contracts with the Agency for International Development, State Department, Department of Defense and Intelligence Communities, and developed their rule of law business line” while a sales director at LexisNexis.

Ultimately, PAI’s approach to nonprofit work follows a model we see throughout today’s corporate-funded nonprofit and advocacy world, one that combines a promise of positive social “Impact” with opportunities to improve the bottom line of the businesses involved. Indeed, PAI is hiring an “engagement lead” to oversee the group’s events, including its annual two-day All Partners Meeting, which will surely be a ripe opportunity for networking.

About Kate Kaye

Kate Kaye is a journalist who tells deeply-reported stories with words and sound. Her work has been published in Protocol, MIT Technology Review, CityLab, OneZero, Fast Company, and many other outlets, and it’s been heard on NPR, American Public Media’s Marketplace and other radio programs and podcasts. Kate has been interviewed about her work across the media spectrum from Fox’s Stossel show to Slate and NPR’s Weekend Edition. Follow Kate Kaye on Twitter: https://twitter.com/KateKayeReports
