What you’ll learn in this article:
- Despite demands for algorithmic transparency and explainable AI from a wide array of stakeholders, the sheer complexity of neural networks could limit the ability to track all the steps they take to make decisions.
- Proprietary considerations could prevent meaningful AI transparency.
- Efforts to devise a means of explaining decisions made by complicated AI systems are nascent.
It’s more like alchemy than science, they say. Exposing its inner workings would be like revealing a private hideaway. Ask researchers, data scientists and engineers if they think algorithms and the decisions they enable can be illuminated in any meaningful way, and you’re bound to get descriptions like these. Despite the enigmatic quality of algorithms and the often-elusive machine learning processes that produce artificial intelligence-based technologies, demands for algorithmic transparency and explainable AI are reaching a crescendo.
But is it technically possible to unveil these cloaked and increasingly complex computational models? And even if algorithmic transparency is technically feasible, can we expect corporate entities engaged in a worldwide battle for AI domination to divulge this proprietary information in any meaningful manner?
First, a simple explanation of what an algorithm is: algorithms are the recipes of AI, the step-by-step processes that drive artificial intelligence decision-making and problem solving.
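As a minimal illustration of the recipe idea (the loan-screening rule and its thresholds below are invented for this example, not drawn from any real system), a hand-written algorithm makes every step of a decision visible:

```python
# A toy "recipe": a hand-written decision procedure whose every step is visible.
# The rule and the thresholds are hypothetical, purely for illustration.

def screen_loan_application(income: float, debt: float, missed_payments: int) -> str:
    """Return 'approve' or 'deny' using three explicit, inspectable steps."""
    debt_ratio = debt / income if income > 0 else float("inf")  # step 1: compute a ratio
    if missed_payments > 2:   # step 2: a hard rule anyone can read
        return "deny"
    if debt_ratio > 0.4:      # step 3: another explicit threshold
        return "deny"
    return "approve"

print(screen_loan_application(income=52_000, debt=18_000, missed_payments=1))  # approve
```

With a recipe like this, transparency is trivial: the steps are the explanation. The trouble begins when the steps are learned from data rather than written by hand.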
Proponents of algorithmic transparency want these steps and processes made visible enough to be inspected, particularly when they lead to decisions with questionable or negative consequences, such as denying a job application or keeping someone imprisoned unfairly. And, going a step further, they want AI systems to explain themselves, whether they are software for determining which potholes to fill or platforms parsing data for the next cancer drug.
If the unlikely jumble of supporters who claim they want transparency is any indication, there is little agreement on how algorithmic transparency or explainable AI should be defined or implemented.
Over the past few years, government officials from German Chancellor Angela Merkel to Ajit Pai, the business-friendly FCC Chairman, have called for algorithmic transparency. Advocacy groups including the Electronic Privacy Information Center and trade groups like the Association for Computing Machinery want it. The Institute of Electrical and Electronics Engineers (the massive “I-Triple-E” professional association) suggests that AI product designers should enable domestic robots to explain themselves by equipping them with a “why-did-you-do-that button which, when pressed, causes the robot to explain the action it just took.”
Europe’s sweeping General Data Protection Regulation privacy law includes a right to explanation regarding AI decisions affecting citizens. The city of New York launched a task force this year to figure out a means of evaluating “automated decision systems” for transparency and equity. And even some corporate-aligned entities, such as the Google-, IBM- and Amazon-led Partnership on AI (itself often the subject of demands for algorithmic accountability), have said they support it.
Determining exactly who supports what, and in what form, is where obstacles to consensus begin to appear.
How Many Licks Does It Take to Get to the Center of a Neural Net?
“I’m clearly biased toward the transparency side,” said Jana Eggers, CEO of Cambridge, Massachusetts-based Nara Logics, an MIT-connected startup whose AI platform helps customers from processing plant engineers to financial analysts make decisions and understand why and how the AI arrived at them. Eggers is a mathematician and computer scientist (and Ironman competitor) with a background in genetic algorithms and expert systems, inherently explainable precursors to today’s more opaque AI fueled by machine learning and neural networks.
She has also worked with that far more sophisticated AI: neural networks. Built from enormous mathematical equations involving tens of thousands of variables, they are webs of connected nodes loosely modeled on the neurons of a human brain. People who understand how these systems are built suggest that their sheer complexity can make it impossible to track all the steps they take to reach a decision.
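To make the scale concrete, here is a minimal sketch of a small neural network (PyTorch is assumed here purely for illustration; none of the people quoted necessarily use it). Even this modest model has roughly 21,000 learned weights, and no single one of them maps to a human-readable reason.

```python
# Minimal illustration of why neural networks resist step-by-step inspection.
# PyTorch is assumed only for illustration; the point is the parameter count.
import torch.nn as nn

model = nn.Sequential(               # a small network: 100 inputs -> 128 -> 64 -> 2 outputs
    nn.Linear(100, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 2),
)

n_params = sum(p.numel() for p in model.parameters())
print(f"learned parameters: {n_params:,}")   # 21,314 weights and biases

# Every prediction is the product of all of these numbers interacting at once;
# there is no single weight you can point to as "the reason" for a decision.
```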
Because AI development has for so long been oriented toward research rather than implementation, Eggers suggested, some level of transparency and explainability will become possible now that there is more emphasis on those goals. However, she concluded, “I don’t think every time we’ll always be able to explain why.”
“Deep learning neural networks are more alchemy than science at the moment.”
–Veda Hlubinka-Cook
Neural networks learn over time, but as they grow more complex and dense with data, there is less and less visibility into how they learn or make decisions, explained Ira Cohen, CEO and chief data scientist of Anodot, an AI company that helps clients including Lyft and Waze monitor business analytics and operations and flag potential concerns. By the nature of what the company does, Anodot’s AI cannot get away with flagging problems or anomalies for clients without providing the why; it must give some explanation of why the system deemed something distinct or problematic, said Cohen.
When it comes to highly intricate neural networks, however, he suggested, “You don’t know what’s going on in the middle, really.”
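Anodot’s actual system is proprietary, so the sketch below is only a generic illustration of the principle Cohen describes: pair an anomaly check with a plain-language reason. The z-score rule and its threshold are assumptions made up for this example.

```python
# Generic sketch of "flag an anomaly, but say why" (not Anodot's method).
from statistics import mean, stdev

def flag_anomaly(history, latest: float, threshold: float = 3.0):
    """Return (is_anomaly, reason) so every alert carries an explanation."""
    mu, sigma = mean(history), stdev(history)
    z = (latest - mu) / sigma if sigma else 0.0
    if abs(z) > threshold:
        return True, (f"value {latest:.1f} is {abs(z):.1f} standard deviations "
                      f"from its recent average of {mu:.1f}")
    return False, "within normal range"

recent_values = [120, 118, 125, 122, 119, 121, 123, 120]
print(flag_anomaly(recent_values, 180.0))
# (True, 'value 180.0 is 26.0 standard deviations from its recent average of 121.0')
```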
This darkening impenetrability is one reason we hear the ubiquitous “black box” phrase thrown around AI accountability discussions so often. It’s not just because Google, Amazon, Facebook and others keep their algorithmic code close to the vest — it’s because as these systems advance they become increasingly difficult to decipher.
It’s no wonder Veda Hlubinka-Cook, software programmer and CEO of journalism tech startup Factful, put it in more mystical terms: “Deep learning neural networks are more alchemy than science at the moment.”
Cheryl Martin believes it could be possible to illuminate certain aspects of a neural network’s decision-making, but she suggests these labyrinthine systems just are not built to reveal details in a way that humans can perceive. The former NASA and University of Texas researcher is currently Chief Data Scientist at Alegion, an Austin-based firm that ensures data sets used in AI are accurate and standardized.
To explain a neural network’s decision, she said, would require a new layer in the design and development process. “It would have to be something that the system designers build in as a separate capability.”
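What “building it in as a separate capability” might look like is an open question. One purely illustrative option, not Alegion’s or anyone else’s actual design, is to wrap a trained model with a crude input-sensitivity check so each prediction can be returned alongside the inputs that moved it most:

```python
# Hypothetical sketch: wrap any fitted model with a separate explanation layer
# (a simple input-sensitivity check) so predictions ship with "which inputs mattered".
import numpy as np

class ExplainableWrapper:
    def __init__(self, model, feature_names):
        self.model = model                    # any object with a .predict(X) method
        self.feature_names = feature_names

    def predict_with_explanation(self, x, eps=0.1):
        baseline = self.model.predict(x.reshape(1, -1))[0]
        effects = []
        for i, name in enumerate(self.feature_names):
            nudged = x.copy()
            nudged[i] += eps * (abs(x[i]) + 1.0)          # perturb one input at a time
            shifted = self.model.predict(nudged.reshape(1, -1))[0]
            effects.append((name, float(abs(shifted - baseline))))
        effects.sort(key=lambda pair: pair[1], reverse=True)
        return baseline, effects[:3]                      # prediction plus its top-3 drivers

# Usage with a stand-in model (a toy linear scorer, not a real neural network):
class DummyModel:
    def predict(self, X):
        return X @ np.array([0.5, -2.0, 0.1])

wrapper = ExplainableWrapper(DummyModel(), ["income", "missed_payments", "age"])
score, top_factors = wrapper.predict_with_explanation(np.array([3.0, 1.0, 40.0]))
print(score, top_factors)
```

A wrapper like this approximates, rather than reveals, what the underlying model is doing, which is one reason post-hoc explanation remains an open research problem.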
Cracking the Enigma
Some researchers are working to make neural networks less opaque, though. Earlier this year, MIT’s Lincoln Laboratory Intelligence and Decision Technologies Group published research showing how transparency into a decision-making process might be built into neural networks.
And the US military’s research and development agency, the Defense Advanced Research Projects Agency (DARPA), is in the early phases of its Explainable AI (XAI) program. The plan is for the project to produce a “toolkit library consisting of machine learning and human-computer interface software modules that could be used to develop future explainable AI systems.”
[For more, see RedTail’s insider peek at a DARPA XAI research project underway at Oregon State University here.]
Sure, a lot of explainable AI processes may be devised for relatively mundane purposes: Why did the energy usage monitoring software recommend adjusting the fan power in Zone B? Why does this marketing tool want me to spend 12% of my ad budget on these three zip codes?
“In many senses we don’t know exactly when deep neural networks fail and why.”
– Bryan Reimer, MIT
When it comes to the Department of Defense, of course, the stakes are much higher. According to DARPA, “Explainable AI—especially explainable machine learning—will be essential if future warfighters are to understand, appropriately trust, and effectively manage an emerging generation of artificially intelligent machine partners.” An opinion piece in the conservative-leaning Washington Times called the DARPA program “a common-sense comfort in a machine takeover world.”
People with values that differ from those of an autonomous US Army robot, however, might be less satisfied with simply knowing after the fact why it aimed a weapon where it did. Indeed, if any relevant level of algorithmic transparency for explainable AI is actually feasible, the dynamics at play in determining what requires explanation and why make for entirely separate debates.
For driverless vehicles — and the semi-autonomous ones that are already on the road — algorithmic transparency and explainability seem especially important. But MIT Research Scientist Bryan Reimer, who oversees autonomous vehicle technology research in MIT’s AgeLab, believes that while it may become technically feasible in the future, “In the context of cars [algorithmic transparency] largely is not really technically feasible today.”
Many AI ethics discussions around autonomous vehicles deal with crash response and what the systems are designed to value. How might a car explain its decision if it swerved toward an 80-year-old person to avoid hitting a younger one pushing a stroller, for instance? While neural networks and deep learning systems might incorporate ethics in devising parameters for making such decisions, said Reimer, if they fail to perform as expected, they may not be as explainable. “In many senses we don’t know exactly when deep neural networks fail and why,” he said.
Still, some companies are developing approaches to vehicle transparency and explainable AI today. Earlier this year at an O’Reilly and Intel AI event in San Francisco, Accenture’s AI Innovation Lead Teresa Escrig stood before a slide projection featuring blue squares representing vehicles in relation to their position, speed, orientation, trajectories and other factors used in data models for “Transparent Decision Making.” The company, she said, is developing a user-friendly approach to “Decision History” which could reveal the steps that led to a vehicle’s decision. It looked something like mapping app directions.
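Accenture has not published the details of that system, but as a purely hypothetical illustration, a “decision history” could be as simple as an append-only log of timestamped observation-and-action records that can be replayed after the fact, much like turn-by-turn directions:

```python
# Hypothetical "decision history" record for a vehicle, loosely in the spirit of
# the turn-by-turn idea described above (not Accenture's actual data model).
from dataclasses import dataclass, field
from typing import List

@dataclass
class DecisionRecord:
    timestamp: float          # seconds since start of trip
    observation: str          # what the perception system reported
    action: str               # what the vehicle did
    rationale: str            # the factor(s) that drove the choice

@dataclass
class DecisionHistory:
    records: List[DecisionRecord] = field(default_factory=list)

    def log(self, t, observation, action, rationale):
        self.records.append(DecisionRecord(t, observation, action, rationale))

    def replay(self):
        for r in self.records:
            print(f"[{r.timestamp:6.1f}s] saw: {r.observation} -> did: {r.action} (because {r.rationale})")

history = DecisionHistory()
history.log(12.4, "vehicle ahead braking, gap 18 m", "reduce speed to 40 km/h", "closing distance below threshold")
history.log(13.1, "adjacent lane clear", "signal and change lane left", "maintain target speed safely")
history.replay()
```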
When IP and Profit Motives Have a Seat at the Transparency Table
So, it’s safe to say there are plenty of open questions about the technical feasibility of making algorithms less opaque and more explainable. But there is far less recognition of a potentially higher hurdle: corporate intellectual property constraints. MIT’s Reimer put it simply: “I don’t think we’ll see full transparency [in autonomous vehicles] because of the proprietary nature of these systems.”
Of course, the IP question pertains to all AI applications, not just autonomous vehicles. AI giants in Silicon Valley, Shanghai and elsewhere have the capital, resources and sheer industry stature to guide what transparency and explainable AI will look like as governments look to them for expertise. What incentives do these competing corporate entities have to enable transparency in ways that are meaningful or to enable monitoring for compliance enforcement in the future?
Smaller startups, which typically don’t like the idea of an AI oligarchy carving out rules that leave the little guy in the dust, might have even less incentive to reveal such information. “If all the inner workings of an AI-powered platform were publicly available, then intellectual property as a differentiator is gone,” wrote Rudina Seseri, founder and managing partner of Glasswing Ventures, a venture capital firm that invests in early-stage AI startups, in a TechCrunch article earlier this year.
“When you show somebody your algorithm, it’s like letting someone into your inner sanctum,” said Cathy O’Neil, mathematician, data scientist and author of Weapons of Math Destruction, whose firm ORCAA builds custom tools to inspect the potential bias and harm facilitated by algorithmic systems. In other words, asking AI tech firms to divulge how their algorithms function would be akin to asking secretive candy maker Mars to hand over the top-secret Snickers Bar recipe.
Continued Seseri, “If the IP had any value, the company would be finished soon after it hit ‘send.’ That’s why, generally, a push for those requirements favors incumbents that have big budgets and dominance in the market and would stifle innovation in the startup ecosystem.”
Enter the Partnership on Artificial Intelligence to Benefit People and Society. One might say the group favors the incumbents. In 2016, founding partners Amazon, DeepMind (owned by Google parent Alphabet), Facebook, IBM and Microsoft launched the Partnership on AI. Most recently, Chinese tech firm Baidu joined.

This non-profit organization, founded by five of the world’s most dominant tech companies, was created to “work to advance public understanding of artificial intelligence technologies (AI) and formulate best practices on the challenges and opportunities within the field.” After two years, however, little detail about progress toward its promise to guide AI ethics principles has been made public. The organization did not respond to multiple inquiries to comment for this article.
Corporate-aligned entities such as the Partnership on AI are bound to wield tremendous power as governments rely on industry knowledge to help determine what algorithmic transparency and explainable AI look like in practice. As AI advances far beyond the understanding of the laypeople running government, we might ask ourselves: who will hold the tech giants influencing the legislative and regulatory process accountable?
Whatever is happening behind the scenes, these efforts arouse skepticism. “On the algorithmic side, grandstanding by IBM and other tech giants around the idea of ‘explainable AI’ is nothing but virtue signaling that has no basis in reality,” argued Seseri in her TechCrunch commentary. She went on to imply that IBM would be unlikely to reveal explanations for how its Watson AI technology works.
“Nobody will ever willingly, especially IP protectors, let you into source code,” said O’Neil.
Making an “Explain Me” Button
Remember that domestic robot “Explain Me” button? Even if AI technologists one day enable meaningful transparency and explainability, product designers will have to figure out what that looks like in practice. How will the information be presented to us? Steve Umbrello, managing director at the Institute for Ethics and Emerging Technologies, focuses on design questions such as this in his research. Ultimately, he is not convinced that industry will have much desire to change its products to enable such transparency.
“[The corporations’] goal, whether we like it or not, is profitability,” he stated bluntly. “What reason does industry have to change the way they design their technology?” Decisions regarding transparency or explainable AI as they relate to product design, he suggested, could be subject to profit-driven motives. Even if corporations are incentivized, say through market forces and consumer demand, to enable some form of transparency, he continued, “I find it really difficult to figure out a way to arrive at a stage in which industry does it for those genuine reasons.”
In addition to global professional organizations like the I-Triple-E, industry leaders like Watson-creator IBM have already picked up the ethical design mantle. AI product designers like IBM Distinguished Designer Adam Cutler are thinking seriously about this stuff from a user experience perspective. And like just about every other aspect of implementing the still quite theoretical concept of AI explainability, there’s no standard blueprint.
The soft-spoken Cutler stood before a small group in a nondescript hotel conference room at the aforementioned AI conference in September. He had just introduced a new set of guidelines from IBM entitled Everyday Ethics for Artificial Intelligence. When asked what he thought of the IEEE’s explainable robot button idea, he acknowledged there is no common template for how AI-based products might explain decisions.
“There isn’t a good answer to say, ‘In all cases make sure that there is this type of control, or be able to flip over the screen and see the math or code behind the [JavaScript] it spits out,’” he said.
Ideally, the kind of design work Cutler and others are doing will evolve the concept of explainable AI into a more practical reality, transforming the too-technically-dense explanations of AI engineers into something readily digestible by laypeople who just want to know why their kitchen robot suggested slowly stirring the risotto. This distinction, writes Rumman Chowdhury, responsible AI lead at Accenture, is a key difference between “explainable AI” and what she calls “understandable AI.”
“These guys are really infinitely smart, but they just don’t care about [fairness or transparency].”
– Cathy O’Neil, Weapons of Math Destruction
Indeed, researchers such as Andrew Selbst of the Data & Society Research Institute and Julia Powles of Cornell Tech suggest flexibility is important. In “Meaningful Information and the Right to Explanation,” a 2017 paper evaluating how AI designers might execute on Europe’s privacy rules calling for AI explanations, they noted, “One might think that ‘meaningful information’ should include an explanation of the principal factors that led to a decision. But such a rigid rule may prevent beneficial uses of more complex ML systems such as neural nets, even if they can be usefully explained another way.”
Umbrello remains doubtful that tomorrow’s advanced machine learning and neural networks will be able to explain themselves, no matter how much academia, regulators or corporate execs debate the intricacies of AI transparency. “It’s almost naïve to think with the exponential development of technology that we’ll somehow have some sort of clairvoyance to understand the already black box nature of technology like algorithms,” he said. “To think in the future that we’ll understand them is incredibly disingenuous.”
Cathy O’Neil, on the other hand, believes technologists will be able to make transparency and explainable AI happen, but argued that academic and corporate pressures place the emphasis on other goals. Computer science departments are far more concerned with ensuring their students get jobs at Facebook and Google, and are thus primarily focused on questions of accuracy and speed, she said.
“I have talked to computer scientists who are like, ‘Don’t tell anybody I talked to you and am interested in fairness or transparency because I won’t get tenure because it’s not considered a serious subject in my department,’” she said, concluding, “These guys are really infinitely smart, but they just don’t care about that.”