What you’ll learn in this article:
- The global IEEE trade group has put its stamp of approval on recommendations for ethical AI development after three years of work.
- The framework’s mission stands in stark contrast with the profit-driven motives dominant at most tech firms engaged in the oft-referenced AI Race.
- The guidance incorporates considerations inspired by western philosophies as well as eastern and African tradition.
- The recommendations could serve as an example of what firms like Boeing could do to develop ethically sound autonomous systems.
Two deadly Boeing 737 Max plane crashes in the last six months have lent urgency to the need to ensure complex automated software systems are designed to be transparent enough to explain themselves to humans, and to allow for human control and auditability. Those are just a few components of official recommendations for the ethical design of autonomous systems unveiled today by the global trade group the Institute of Electrical and Electronics Engineers, a.k.a. the I-Triple-E.
This framework for incorporating ethical approaches into the development of AI systems stands in stark contrast with the profit-driven motives dominant at most tech firms engaged in the oft-referenced AI Race. The IEEE’s Ethically Aligned Design recommendations, which incorporate ethical philosophies from the east, west and Africa, call for designers of autonomous and intelligent systems to prioritize metrics aligned with human well-being rather than suppressing them amid goals of profit and exponential growth.
“Whether our ethical practices are Western (e.g., Aristotelian, Kantian), Eastern (e.g., Shinto, 墨家/School of Mo, Confucian), African (e.g., Ubuntu), or from another tradition, honoring holistic definitions of societal prosperity is essential versus pursuing one-dimensional goals of increased productivity or gross domestic product (GDP),” states the guidance document, published today.
Engineers developing corporate AI systems around the world are IEEE members, which makes the group’s recommendation to buck the norm of tech development-for-the-sake-of-“innovation” — which some see as a euphemism for exponential growth, profit and market dominance — especially poignant.
“The mindset of ‘it’s going to hinder innovation,’ I think is actually synonymous with ‘it will keep us from getting to market because we feel the pressure to do so because of a single bottom line,’” said John Havens, executive director of the IEEE’s committee overseeing development of the 294-page tome. (Don’t worry – there’s a much shorter condensed overview available.)
Aristotle, Buddha and Ubuntu Community
A product of three years of work involving revisions of two previously proposed versions and input from at least 400 ethics and technology academics, AI researchers, engineers, and privacy, policy and legal professionals, the IEEE recommendations are based on a set of eight general principles. In addition to human rights and well-being, the principles include transparency, accountability, effectiveness, competence, “awareness of misuse” and “data agency,” which gives individuals control over their data.
This first edition was devised by participants mainly from the US and Europe. However, Havens said 40 were from China, 30 from Japan, around 20 from India, and fewer than 10 each from Africa, Brazil and Korea.
“Shinto tradition posits that everything is created with, and maintains, its own spirit (kami) and is animated by that spirit—an idea that goes a long way to defining autonomy in robots from a Japanese viewpoint.”
– IEEE’s Ethically Aligned Design
Though the IEEE aims to broaden its globally minded approach in future ethics guidance, the group took care to incorporate non-western philosophies into the new recommendations. For instance, the document notes that community-centric Ubuntu philosophy says “a person is a person because of other people,” and suggests the tradition “may offer a way of understanding the impact of [autonomous and intelligent systems or A/IS] on humankind, e.g., the need for human moral and legal agency; human life and death decisions to be taken by humans rather than A/IS.”
In discussing Japan’s Shinto tradition, which holds that natural and artificial objects all contain their own spirit, the IEEE notes, “Shinto tradition is an animistic religious tradition, positing that everything is created with, and maintains, its own spirit (kami) and is animated by that spirit—an idea that goes a long way to defining autonomy in robots from a Japanese viewpoint.”
Methods for Ethics-by-Design
The belief of the IEEE and others devising guidance for ethical AI systems is that design, development and implementation that incorporate core values can avoid potential injury, death, discrimination and other risks associated with these decision-making technologies.
AI ethics researchers in academia, corporate-aligned researchers and even military-funded researchers are hard at work developing technical ways of enabling increasingly opaque AI systems to be more fair, transparent, explainable and accountable. Other groups, including the EU, have proposed guidance for integrating such values as technologies are built, rather than patching them on as an afterthought.
“It is still true that systems that don’t explain themselves to human beings who interact with them are dangerous.”
– Eben Moglen, Software Freedom Law Center, Interviewed on NPR
The IEEE’s document recommends several methods for this ethics-by-design approach. In addition to more robust ethics education, the group calls on corporations to establish ethics leadership and “identify stages in their processes in which ethical considerations, ‘ethics filters’, are in place before products are further developed and deployed.” They might also give employees — the people contributing to the development of these tools — the ability “to raise ethical concerns in day-to-day professional practice.”
When corporations are testing new autonomous systems, suggested Havens, they should “re-frame it so you’re not thinking about it as compliance and risk. Think about it as R&D.”
Boeing Has an Internal Ethics Process – Why Didn’t It Work?
Of course, some corporations have ethics boards. Following a 2003 scandal involving stolen trade secrets and government contracts, Boeing itself established an internal ethics and compliance organization, hired ethics advisors and even created a hotline that employees can call to blow the whistle on ethical concerns. It is unknown whether employees of the plane manufacturer raised alarms about a lack of confidence in the 737 Max’s software or its minimal pilot training.
From Boeing’s ethics and compliance language:
Boeing maintains policies and procedures to encourage employees to report concerns and seek guidance, using confidential and, when preferred, anonymous methods, including contacting local ethics advisors, using toll-free phone numbers and accessing web-based portals.
On the March 22 episode of NPR’s Morning Edition, Columbia Law Professor and Founding Director of the Software Freedom Law Center, Eben Moglen, said something as immediately relevant to the Boeing situation as it was prescient in relation to potential dangers of tomorrow’s autonomous and intelligent systems:
“What we’re looking at in the case of some aerodynamic software taking over from pilots without telling them, is an example of why, even if you didn’t think this had anything to do with politics, it is still true that systems that don’t explain themselves to human beings who interact with them are dangerous.”
Moglen was talking about Boeing’s failure to communicate intricacies of the plane’s software to the pilots flying those planes, but he just as easily could have been talking about risks associated with driverless cars, eldercare robots, autonomous drones or AI-based medical diagnostic platforms that do not or cannot explain what’s behind their decisions.
The IEEE recommendations address such concerns by calling for stakeholders to adopt rules and standards that ensure effective human control over AI-based decisions, and enable technical dependability by establishing “validation and verification processes, including aspects of explainability.”
Recent investigative reports likely have only scratched the surface of what went wrong at Boeing. It appears, however, that the company may have fallen prey to profit-driven pressure in its rush to get the 737 Max system, and the software that operates it, to market. Whether corporations building other autonomous and intelligent systems can concoct a recipe that works to prevent future AI risks, within a culture driven by shareholder and VC-exit goals, remains to be seen.