What you’ll learn in this article:
- Intellectual property and security concerns drove the US software tech industry to push to prohibit governments from forcing corporations to divulge algorithms in the new US-Mexico-Canada Agreement, a trade-deal first.
- Advocates for algorithmic transparency fear the new trade provision could lock in the industry stance and create new blockades against obtaining explanations for AI decisions.
- Trade experts expect the algorithm non-disclosure language to make its way into future trade deals.
Earlier this year, as negotiations for the updated NAFTA agreement sputtered, there was a lot of talk about sticking points involving things like Canadian dairy and Mexican auto parts. What we didn’t hear much about were algorithms, the complex technical processes that make artificial intelligence systems work. The media’s focus on more tangible issues was just fine for companies like Amazon, Facebook, Google and the tech industry lobby behind them as they worked to influence the treaty, ensuring that algorithm disclosure stayed off the trade table.
The new version of NAFTA, known officially as the United States–Mexico–Canada Agreement or USMCA, was signed September 30. Like other deals before it, it could serve as a template for future trade agreements. And a few mostly ignored lines therein illuminate the tensions emerging among the public’s right to explainable AI decisions, corporate intellectual property and government security.
In June 2017, the Computer and Communications Industry Association sent the Office of the US Trade Representative detailed policy goals for future trade agreements, including lengthy comments suggesting that trade partner countries be prohibited from compelling corporations to cough up information about their algorithms. The organization, which represents modern-day AI tech juggernauts including Amazon, Google, Facebook and Nvidia, reiterated this suggestion in October 2017, this time specifically in reference to the NAFTA re-negotiations. The nearly-50-year-old trade group also reinforced its position through donations to US lawmakers sitting on key congressional subcommittees overseeing trade deals and intellectual property.
The powerful trade lobby got its wish. Article 19.16 of the USMCA prevents governments from requiring software owners to provide source code, or an algorithm expressed in that source code, as a condition for importing or selling their software. The agreement includes a carve-out for cases in which a judicial or regulatory body needs source code or algorithms in relation to a specific investigation or judicial proceeding.
Industry Teamwork Makes the Deal Dream Work
“This is the effort to create new norms that will be promulgated through new trade agreements or through an expansion of the USMCA,” said Anupam Chander, a law professor at Georgetown Law who believes the intent is to influence future trade agreements. Chander, who focuses on global tech regulation, confirmed that the trade deal has no bearing on how private corporations interact with one another.
“These efforts are all coordinated.”
— Burcu Kilic, Public Citizen
Other software industry groups joined in to limit algorithm exposure. The BSA Software Alliance, a tech industry trade association featuring members that are big in AI including IBM and Microsoft, lauded the USMCA in a press announcement, specifically highlighting its support for “digital trade provisions that…limit requirements to disclose proprietary source code and algorithms.” Meanwhile, the Software and Information Industry Association, whose website lists hundreds of members including Google and Facebook, pushed this January for a NAFTA agreement that included new rules on “non-disclosure requirements for source code and algorithms.”
“Among the quote-unquote trade associations [groups including SIIA, BSA, CCIA and others] we were all supportive of this provision and we were public about our support,” said Carl Schonander, senior director for international public policy at SIIA, regarding the limit on algorithm access in the USMCA.
Both the CCIA’s and the BSA’s political action committees have donated recently to legislators sitting on congressional committees overseeing trade and intellectual property. According to data from the Center for Responsive Politics, since 2016, senators on the Subcommittee on International Trade, Customs, and Global Competitiveness, including Oregon Democrat Ron Wyden, have received funds from both organizations. During the 2018 cycle, each group gave to House Democrats Zoe Lofgren – who represents California’s 19th district in the heart of Silicon Valley – and Hakeem Jeffries of New York; both won re-election on November 6 and sit on the House Courts, Intellectual Property and the Internet Subcommittee. Missouri Republican Blaine Luetkemeyer, who sits on the House Subcommittee on Agriculture, Energy and Trade, also received donations from the two groups. He won his race, too. Congress is expected to consider legislation implementing the USMCA in 2019.
“These efforts are all coordinated,” said Burcu Kilic, legal and policy director on access to medicines, information and innovation at Public Citizen. The push to restrict access to algorithms, she continued, “doesn’t work for people, it doesn’t work for users, it doesn’t work for consumers.”
Neither the BSA nor the US Trade Representative’s Office responded to multiple inquiries to comment for this story. The CCIA said it could not make a spokesperson available.
Why the Tension over Algorithm Access?
Demands for ethical artificial intelligence that takes into consideration the impact on humans and society often focus on the concept of algorithmic transparency. In other words, AI ethics advocates want algorithms – the steps and processes that enable AI – to be made visible enough to inspect and understand, particularly when they lead to decisions with questionable or negative consequences, such as a job application denial or a driverless vehicle accident. (Read RedTail’s feature about other obstacles to algorithmic transparency.)
“There are clearly situations when consumers and others may want to understand the background of an algorithm so that you can avoid what in the United States is known as disparate impact on protected groups,” said the SIIA’s Schonander. “We are supportive of companies, upon request, providing a rough narrative of how their algorithms work, but we don’t think it’s fair or correct to ask companies to provide the underlying source code to make those algorithms work.”
Hand-in-hand with algorithmic transparency is the idea of explainable AI, or AI that can explain the decisions it makes in a meaningful way. Perhaps one of the most significant examples of this demand for explainable AI came in Europe’s GDPR privacy law, which calls for a right to explanation regarding AI decisions affecting EU citizens (more on that in a bit).
But tension is already mounting. Some proponents of source code and algorithmic openness argue that the USMCA’s prohibition on algorithm disclosure requirements locks in the tech industry’s position that algorithms are proprietary, thus creating new blockades against obtaining explanations for AI decisions that affect people.
In June, Google, a member of the CCIA and SIIA, put forth a set of principles it said would guide its AI technology development, promising to “be accountable” by allowing for “relevant explanations” for AI and incorporating “appropriate transparency” over data use. Yet if the industry’s apparent influence over the trade agreement’s algorithm provision is any indication, competitive business interests could take precedence over meaningful accountability and transparency.
One consumer advocate’s right to openness is another tech firm counsel’s intellectual property. “Generally speaking, transparency and explainability is something that we view as an ethical consideration,” said Maya Medeiros, partner and intellectual property lawyer with Canadian law firm Norton Rose Fulbright, who spoke with RedTail in general terms rather than in direct relation to the USMCA.
“It’s not clear to me that the industry all has one view of this issue.”
— Anupam Chander, Georgetown Law
“On the flip side,” said Medeiros, also a computer science and math scholar, “the company who is developing the algorithm would want to rely on keeping it secret as a form of IP protection. When I talk about AI and ethics and transparency and explainability, I always talk about the tension with IP.” She explained that copyrights and patents on source code typically do not protect algorithms, which she described as “a more powerful set of knowledge” than source code.
Throw open-source software philosophies into the mix and the issue grows murkier. “Historically there is a lot of industry who believe in open-source,” said Georgetown’s Chander, who noted that some pro-open-source engineers support algorithmic transparency. “It’s not clear to me that the industry all has one view of this issue.”
Cyber Competition Meets Cyberespionage (and the Pentagon)
As an advocate for public rights to medicine, technology and information, Public Citizen’s Kilic sees more than intellectual property ownership concerns in trade language like the algorithm disclosure provision in the USMCA. “They will tell you, ‘Oh no, this has nothing to do with public policy, it’s about competitive advantage.’ Whoever controls the data controls the future, right? So they want to control the future.”
So, what do governments want with a software company algorithm anyway? There are all sorts of reasons why a government might want to compel a corporation to reveal source code or algorithms. For instance, in 2016, leading up to the California Air Resources Board’s settlement with Volkswagen over surreptitious devices in its diesel-fueled vehicles, the agency demanded that the German carmaker hand over software code controlling emissions systems.
Trade protection and competitive advantage are one thing. But on today’s cyberwarfare battlefield, software originating in foreign countries can pose a threat to computer systems and infrastructure. Governments – most notably China – have reportedly subjected some of the biggest US players in AI tech development, including Microsoft and IBM, to source code reviews. Some companies have been willing to kowtow to these demands because they want to do business in a massive and potentially lucrative market like China.
But it’s no surprise that US tech firms would prefer this sort of poking about not happen, which is one reason they want to ensure that source code reviews don’t expand to include even more invasive algorithm inspections, or, better yet, that such inspections are not allowed at all.
In a statement sent to the US Trade Representative in 2017, the CCIA argued that, if faced with source code inspections and demands for access to encrypted information, companies “will be required to alter global platforms or design region-specific devices, or face fines and shutdowns for noncompliance.” The organization suggested, “Companies that might have otherwise expanded to these markets will likely find the anti-encryption or facilitated access requirements to be barriers to entry.”
Schonander said the SIIA advocated against the Chinese government’s requirements for source code reviews as a condition of doing business in China because they were inappropriate. “We consider source code to be a form of intellectual property just like algorithms are.”
As of earlier this year, there’s something else some tech firms fear: a barrier to doing business with the Pentagon. With the US government vigilant against hacking and cyberespionage by foreign entities, the Defense Department is clamping down on source code inspections. The National Defense Authorization Act of 2019, signed in August, prevents the Defense Department from using software from companies that have allowed select foreign governments to review a product’s source code. The Secretary of Defense is tasked with determining which countries will be on that list by early next year.
The GDPR Effect?
And then there’s GDPR. Europe’s recently established privacy regime, the General Data Protection Regulation, calls for a citizen’s right to an explanation of AI decisions. While the provision remains controversial in terms of what it means and how it will be applied, it should not be overlooked as a possible inspiration for the software industry’s desire to address algorithm disclosure head-on in trade agreements.
Medeiros, the Canada law firm partner, suggested that language in trade agreements related to algorithm disclosure could be a means of providing future clarity in the wake of the EU regulation. “This kind of language helps for businesses to manage the risk and give some certainty,” she said.
It’s worth noting that trade agreements often mirror previous ones. The defunct Trans-Pacific Partnership, for example, included near-identical language regarding source code to what’s in the USMCA (see figure above). So there is a likelihood that the USMCA’s source code and algorithm provisions could serve as a template for future trade deals.
All in all, the dynamics of intellectual property, international security and growing demands for citizen rights to explainable AI create a situation that is anything but black and white. It remains to be seen whether the algorithm concealment lobby, be they the forces of industry or government security, will have more sway than advocates for transparency in this power struggle, or whether a means of compromise can be found.
Note: This story originally incorrectly stated that the Comprehensive and Progressive Agreement for Trans-Pacific Partnership also included language related to algorithm disclosure.