What you’ll learn from this article:
- OSU is one of several institutions selected to conduct DARPA-funded research as part of the agency's "XAI" project, intended to enable explainable AI systems.
- Researcher Jed Irvine is a nature lover who’s not too keen on gaming, but he’s developing a user interface for a neural network-based military game that he hopes can expose some of its thinking to humans in an understandable way.
Jed Irvine is an unlikely AI technologist. The nature-loving bird enthusiast (and sometime rock drummer) questions a lot about the future impact of the deep learning technologies he works with as Senior Faculty Research Assistant at Oregon State University. Yet despite the fact that he is decidedly "not a gamer," Irvine is part of a small team in the school's Artificial Intelligence and Robotics department that aims to transform the incomprehensible gobbledygook of machine learning language into human language. And they're using a neural network-based, military-themed strategy game to get there.
Irvine and seven others at OSU are among researchers from a variety of institutions selected by DARPA to develop ways of making AI systems explainable. The project on explainable AI – known as XAI – is one of fifteen distinct projects he's worked on that have been funded either by DARPA or the National Science Foundation. The US Defense Department's DARPA (Defense Advanced Research Projects Agency) is the military's petri dish of emerging-tech research, where technologies including the internet and GPS originated under the auspices of the agency's earlier iteration, ARPA.
Irvine, the software developer who created the user interface associated with the Human Computer Interface and Machine Learning project, explained it this way: “Whatever approach you take, whether it’s deep learning or whatever comes next, the question is, can you instrument it – meaning, can you rig your system – to somehow be able to capture enough information that can then be presented to the user meaningfully to expose some of its thinking in an understandable way.”
He works with the team under XAI project lead Alan Fern, associate director of the Collaborative Robotics and Intelligent Systems Institute at OSU, and project co-lead Margaret Burnett, distinguished professor in OSU's School of Electrical Engineering and Computer Science. Thomas Dietterich, a pioneer in the machine learning field and director of Intelligent Systems at the university, is consulting on the effort.
A little over a year into the four-year project, Irvine said he thinks it is possible for complex AI to be explainable AI.
“I’m like an interesting person because I work at these problems, do the best job I can, but I do have a 50/50 split of skepticism of technology and embracing of it,” said Irvine, acknowledging that his passion lies in bioinformatics and the use of machine learning for ecological purposes.
“I much prefer to work on the nature things…. I wish I could be paid to walk around in fields,” he said.
Fort or City, Friend or Foe?
At first glance, the game appears rudimentary. Picture four quadrants representing possible targets such as enemy forts or friendly cities. The game’s system weighs risks and rewards based on factors such as the health and value of the target, potential damage that could be incurred upon striking it, whether it is a fort or a city, ally or adversary. “It’s a game that you would never have fun playing,” admitted Irvine.
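To make that trade-off concrete, here is a loose, hypothetical sketch of the kind of scoring the game involves. It is not the OSU system's logic, which is learned by a neural network rather than hand-written; every name, weight, and number below is invented for illustration.

```python
# Hypothetical sketch of the risk/reward trade-off described above. These
# names and weights are invented for illustration, not taken from the OSU game.
from dataclasses import dataclass

@dataclass
class Target:
    kind: str                     # "fort" or "city"
    is_enemy: bool                # adversary or ally
    health: float                 # remaining health of the target
    value: float                  # payoff for destroying it (or cost of losing it)
    expected_damage_taken: float  # damage our own forces risk by striking it

def strike_score(t: Target) -> float:
    """Crude utility estimate: reward for hitting a valuable enemy target,
    discounted by its remaining health and the damage we expect to absorb."""
    if not t.is_enemy:
        return -t.value  # striking a friendly city or fort is pure loss
    return t.value - 0.5 * t.health - t.expected_damage_taken

targets = [
    Target("fort", True, health=40.0, value=60.0, expected_damage_taken=25.0),
    Target("city", True, health=10.0, value=30.0, expected_damage_taken=5.0),
    Target("city", False, health=80.0, value=50.0, expected_damage_taken=0.0),
]
best = max(targets, key=strike_score)  # the quadrant with the highest estimated payoff
print(best)
```

A learned system weighs the same kinds of factors, but the weights live inside a neural network rather than in a readable formula, which is exactly what makes its choices hard to explain.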
While sometimes the computer’s deep learning system decides to strike in a way that seems logical to human observers, at times it makes a counterintuitive decision based on patterns it recognizes and predictive analysis of future impacts a human may not perceive.

Why the war theme? The military scenario is “a kind of world where decisions have to be made based on circumstances, and the essence of those moves can be stripped down and understood pretty well,” said Irvine, who had the gaming system on display at OSU’s The Promise and the Peril of Artificial Intelligence and Robotics event, held on October 23 at the university’s Corvallis campus, about an hour-and-a-half drive south of Portland.
Here’s where the explainable part comes in. Today, when researchers ask the system, after it has calculated an array of probabilities, why it chose to strike where it did, the answer might be reflected in a number like .05932. “And there’s just no visibility into that,” said Irvine.
The OSU team is working to turn that number into something humans can comprehend and put to use. That involves incorporating capabilities that most deep learning and neural networks do not have in their core design today.
“One way to think of it, in a very abstract way, is leaving breadcrumbs, so that somebody in the future can find their way back to a decision,” said Irvine.
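In code, that breadcrumbs idea might look something like the sketch below: alongside each move, an instrumented system writes out what it saw, the raw scores it computed, and where it was looking, so a reviewer can later reconstruct how the decision was made. This is a hypothetical illustration, not the project's actual logging format; all field names are invented.

```python
# Hypothetical illustration of "leaving breadcrumbs": record, alongside each
# decision, enough of the model's evidence that a person can trace it later.
# The field names and file format are invented for this sketch.
import json
import time

def log_decision(game_state: dict, action_scores: dict, chosen_action: str,
                 saliency_summary: dict, path: str = "decision_trace.jsonl") -> None:
    record = {
        "timestamp": time.time(),
        "game_state": game_state,              # what the agent saw
        "action_scores": action_scores,        # raw outputs like 0.05932, per action
        "chosen_action": chosen_action,        # what it actually did
        "saliency_summary": saliency_summary,  # where it was "looking"
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

# Example breadcrumb for a single move (values invented):
log_decision(
    game_state={"quadrant_1": "enemy fort", "quadrant_2": "friendly city"},
    action_scores={"strike_q1": 0.05932, "strike_q2": 0.00114},
    chosen_action="strike_q1",
    saliency_summary={"quadrant_1": 0.91, "quadrant_2": 0.12},
)
```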
Visualizing the Algorithm’s Process
One approach he and the team are testing is a means of “exposing saliency,” or highlighting the parts of the game that the algorithm was looking at or thinking about, so to speak, as it made a decision. The saliency component visualizes the areas the machine focused on most heavily, so the regions driving its decision stand out to a human observer.
“Each one of these squares is like a view of the game, highlighting some piece of the game,” Irvine said. “The brighter the pixels are, the more interest it’s had there.”
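Gradient-based saliency is one standard way to produce that kind of brighter-means-more-interest view, though the article does not specify which method the OSU team uses. The sketch below, written against PyTorch with a placeholder model, shows the general idea: measure how strongly each cell of the game state influenced the score of the action the network chose.

```python
# Generic gradient-based saliency sketch; not the OSU team's actual method.
# "model" and the input shape are placeholders supplied by the caller.
import torch

def saliency_map(model: torch.nn.Module, game_state: torch.Tensor,
                 action_index: int) -> torch.Tensor:
    """How strongly did each input cell influence the score of the chosen action?"""
    state = game_state.clone().detach().requires_grad_(True)
    scores = model(state.unsqueeze(0))   # assume output shape (1, num_actions)
    scores[0, action_index].backward()   # gradient of that action's score w.r.t. input
    sal = state.grad.abs()               # gradient magnitude ~ "interest" per cell
    return sal / (sal.max() + 1e-8)      # normalize to [0, 1] for display
```

Rendering the returned tensor as an image gives exactly the kind of view Irvine describes: brighter cells are the ones that mattered most to the decision.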
As AI becomes part of our everyday lives, there will be any number of reasons why we’ll want these technologies to explain themselves, from the mundane to the life-altering, out of sheer curiosity or for serious legal purposes. [Check out RedTail’s feature on the obstacles to explainable AI and transparent algorithms here.]
Perhaps not surprisingly, in the near future, Irvine expects the explanation process he’s helping develop to be used in military situations, such as when someone is monitoring autonomous aircraft.
Autonomous weapons “scare me a lot,” said Irvine. [Later, after this article was first published, he elaborated: “What I’m more generally saying is that any AI system that can’t be understood well enough to be properly debugged brings significant risks with its use in any field. So, while I personally wouldn’t choose to use my skills to develop, say, an autonomous weapons system, I’m happy to help solve the problem of making sure they (or any other technology that’s deployed) don’t misbehave in mysterious ways.”]
Not all engineers are comfortable with the idea that technologies they build could be used for military purposes. In fact, Google earlier this year said it would pull back on a project with the Pentagon after engineers rallied to protest the use of their AI work for military ends.
So, how does a nature-loving techno-skeptic like Irvine grapple with the notion that his work could be used for military applications?
He suggested that the many technologies funded by DARPA – in his experience one of the few entities with the resources to support the sort of research he does – eventually find all sorts of non-defense-related uses as they enter the public domain.
He concluded, “I can’t imagine the world being a better place without the ability for machine learning systems to explain themselves.”
Note: A clarification was added to this story after it was originally published to make clear that Irvine is the software developer who created the user interface associated with the DARPA/OSU project, rather than the developer of the technology behind it. Information about the OSU faculty leading the team has also been added, along with Irvine’s elaboration on his comment regarding autonomous weapons.