Ya Know What’s Really Scary about AI? The Giants of Tech Are Driving AI Ethics

Ethically-challenged tech corporations are steering what AI Ethics standards, research and future regulations look like.

A Note from RedTail’s Kate Kaye on Why Accountability and Transparency Matter in the AI Ethics Movement

“Who is writing the rules for AI Ethics? It is the big companies. They are going to kill us.”

Ozcan Cikmaz turned to me following an IBM presentation about ethical AI design standards held at a conference in September in San Francisco. He was visibly agitated and spoke a little too loudly for a business setting. Developing AI is the Turkish technologist’s livelihood; he didn’t want to see his and other AI startups squashed by regulations defined by firms with little concern for powerless competitors.

Cikmaz is an AI startup entrepreneur and as such may have different reasons than many of us for criticizing AI ethics guidelines devised by the giants of tech. But what about everyday citizens? Why should we trust the very same entities that have made front page headlines for involvement in data privacy infringement, sexual harassment, censorship, manipulative AI, biased hiring and unfair labor practices?

When Cikmaz and I spoke, it had only been about a month since I’d decided to build a reporting beat around AI Ethics. Today, six months into my deep AI Ethics dive, I’m in Atlanta for another AI event, this one dedicated to discussion of ethical AI concepts. ACM FAT is the Association for Computing Machinery-sponsored Fairness, Accountability and Transparency Conference.

What Corporate Ethics Look Like Behind Closed Doors

Despite what seem like obvious contradictions, it is these corporations that are steering what AI ethics standards and future regulations look like and how processes enabling ethical AI will be implemented.

The corporate impact is subtle, gradual, and taking place mostly away from the public eye. In November, here in RedTail, I exposed how Amazon, Facebook, Google and the software industry lobby worked behind the scenes to ensure that algorithmic transparency was squelched in the new NAFTA deal. Trade experts told me we can expect to see this brand-new language, which prohibits governments from forcing software owners to provide source code or algorithms, show up in future trade deals.

So, while human rights advocates and academics – and even some AI companies — demand algorithmic transparency to enable explainable and accountable AI, the tech industry pushed for trade language that prevents it.

Google AI Principle #4 is “Be Accountable to People.” Yet, already, the company is fighting new liability rules for AI. Buried at the end of a recent Google white paper about AI governance, the firm cautions that, “while such approaches might indeed strengthen the legal position of the end users of AI systems,” strict liability rules under consideration in the EU “would bring increased exposure to legal uncertainty, as it would mean that anyone involved in making an AI system could be held liable for problems they had no awareness of or influence over.”

Ultimately, we may not need new laws for AI liability. Yet is it any wonder that a company developing the technologies that one day might lead to injury in an autonomous vehicle pileup, inappropriate medical treatment or biased hiring decisions is already laying the groundwork to push against laws that could penalize it for AI gone wrong?

AI law professor Ryan Calo expressed skepticism toward Google’s use of the term “governance” in the paper. He noted on Twitter that “the use of the terms ‘governance’ and ‘ethics’ subtly gesture toward a diminished role for government in channeling AI in the public interest.”

Meanwhile, despite glowing media praise for Microsoft’s support of facial recognition regulations, here in RedTail I spotlighted Microsoft President Brad Smith’s under-reported explanation, which indicated not-entirely-altruistic motives. Speaking at a Brookings Institution event in November, he stated, “We believe in the importance of a law not because we are behind, but because we are ahead.” Indeed, he was referring to Microsoft’s top rankings in an influential Commerce Department evaluation of facial recognition algorithms. The firm wants to help guide that regulation in part to ensure that all facial recognition tech, including that of its potentially less-accurate competitors, is subject to similar accuracy testing.

To the algorithm accuracy victor go the contract spoils?

Despite signs of ulterior motives, the New York Times recently has painted Microsoft with a virtuous brush, stating that “Microsoft has been at the vanguard of warning about the potential negative effects of technology, like privacy or the unintended consequences of artificial intelligence.”

Another recent NY Times proclamation: “With many of its rivals under fire, Microsoft has aggressively tried to position itself as the moral compass of the industry. Company executives have been outspoken about safeguarding users’ privacy as well as warning about the potential discriminatory effects of using automated algorithms to make important decisions like hiring.”

AI Ethics as Cause Marketing

It’s easy to fall prey to trumpeting corporate press releases, self-congratulatory blog posts and feel-good taglines. The reality is that even a movement as esoteric as AI Ethics is becoming productized and co-opted by surface-level corporate brand messaging. For many companies, much like green- and pink-washed slogans and cause marketing, ethical AI is smart from a business and branding perspective.

Accenture and IBM are already stepping in to monetize the AI ethics gap. Each has introduced “fairness” tools and services intended to detect gender or racial bias in AI systems. Hiring software maker Opus AI claims to prevent unfair hiring practices through a system that hides gender, race, ethnicity and age, “making bias impossible.” But can these systems facilitate increased diversity and inclusion, or do they reinforce entrenched societal inequities reflected through other signals such as names and education credentials?

In the end, business decisions may trump ethical guidelines. Accenture’s corporate banking, insurance and software clients are based around the world in locales with varying ethical standards. If clients in India or Australia disagree with Accenture’s determination that a data set is producing biased results, the client wins. “We highlight the skews,” explained Kishore Durg, Accenture’s senior managing director and growth and strategy lead, when we spoke in September. “We let them choose to make those decisions.”

As MIT Media Lab researchers Nick Obradovich, William Powers, Manuel Cebrian and Iyad Rahwan expertly put it in a recent Boston Globe op-ed, “Executives who make idealistic public pronouncements often act very differently when they’re behind closed doors choosing between profits and the public good.”

It’s worth noting who and what gets media attention. When the EU in December published its 29-page draft of ethical AI guidelines, the media barely noticed. Contrast that with the hype around Google’s brief blog post in June listing its seven vague principles for AI, or the attention Microsoft garnered when it proclaimed its support for facial recognition regulation.

(For the record, I am not the only reporter drawing attention to this issue. As UK journo and researcher Robert David Hart wrote recently, “we should be wary of letting tech companies become the only or loudest voice in the discussion about the ethical and social implications of AI.”)

Who’s Funding AI Ethics Research?

Partnership on AI, one of the most amplified voices among a growing cadre of AI ethics groups, was formed by Google’s DeepMind, Amazon, Facebook, IBM and Microsoft. Now Baidu, a Chinese tech giant that came in last place in a worldwide ranking of corporate accountability, has joined the organization, which also includes a variety of advocacy groups, non-profits and other corporations. Meanwhile, Microsoft and Google researchers lead another prominent group guiding the AI Ethics conversation, the AI Now Institute.

Many of the researchers now working for corporate-linked groups or supported by corporate funding genuinely care about ethical AI. However, corporate funding often encumbers researchers’ abilities to do their work or present data and ideas in a fully transparent manner.

Google’s funding of an AI lab at Princeton is “a double-edged sword,” argued Daily Princetonian associate opinion editor John Ort recently. “On one hand, the company fosters legitimate and cutting edge collaboration with universities. On the other, when its dubious conduct comes into question, Google utilizes academia for its own purposes. Through its philanthropic presence, the company gains access to a vast network of legal and scientific scholars.”

Driven by Publicly Traded Multinational Values

I’ve covered corporate and political digital data use and privacy issues for most of my twenty-year reporting career. Listening to some government officials in congressional hearings discuss technology concepts can be like nails on a chalkboard to anyone who actually understands the intricacies of the subject matter. Yet is that a good enough reason for us to allow profit-driven corporations to hijack tomorrow’s AI regulations?

Ultimately, corporations, by their nature, are not altruistic. When SAP’s own “principles for artificial intelligence” suggest “We are driven by our values,” why should we expect the values of a multibillion-dollar company to be the same as those of individual citizens or human rights advocates? Can thoughtful development of ethical AI win when there is a race for AI superiority?

Six months into this AI Ethics journalism endeavor, these are the questions I ask every day when reporting here on RedTail and elsewhere. Accountability and transparency matter in the movement guiding AI Ethics, too. Thanks for reading RedTail and thank you for your support.

 
