What you’ll learn in this article:
- In May 2018, NYC Mayor Bill de Blasio touted the Automated Decision Systems Task Force as an important step toward greater transparency and equity in the city’s use of algorithmic systems.
- Task Force members say they still have not been provided even basic information about AI used by city agencies nearly a year into the process.
- The chair of the task force indicated he does not plan to provide a list of automated systems used by the city to members.
- Concerned watchdogs also lament a lack of transparency about the task force proceedings.
One thing is clear about New York City’s AI transparency and equity initiative: it’s been anything but transparent.
When the New York Mayor’s Office announced its Automated Decision Systems Task Force last year, it did so with a self-congratulatory pat on the back for being the first city in the nation to take such a step.
“The establishment of the Automated Decision Systems Task Force is an important first step towards greater transparency and equity in our use of technology,” said NYC Mayor Bill de Blasio in a May 2018 press release about the task force. The group was charged with developing recommended procedures for reviewing and assessing the algorithmic tools used by the city.
Now, nearly a year into the process, task force members and others are frustrated with a lack of information about which automated systems the city employs. Meanwhile, concerned observers lament a process that itself has been opaque.
“We cannot trust the outcome of this task force without transparency on the process,” said Noel Hidalgo, executive director of open civic technology group BetaNYC, in testimony given during a New York City Technology Committee hearing about task force progress held on April 4. He argued that without meeting notes, timelines and other information about the task force deliberations, there could be no meaningful public engagement.
Hidalgo was among a stream of people, including task force members and advocates for open city government tech and human rights, who gave testimony at Thursday’s hearing. (More information and video of the hearing are available here.)
Members in the Dark
But it’s not just a lack of transparency on the process that’s a problem. Task force members themselves say they are in the dark: despite making regular requests, they have received no information from city agencies about which specific automated systems the city currently employs.
“To date, the city has not identified even a single system,” said Solon Barocas, assistant professor in Cornell’s Department of Information Science, a Microsoft researcher and task force member. “Task Force members need to know about relevant systems used by the city to provide meaningful recommendations.”
“Task Force members need to know about relevant systems used by the city to provide meaningful recommendations.”
– Solon Barocas, Cornell Department of Information Science
Even before several witnesses testified about concerns regarding the task force, Technology Committee Chairman Peter Koo asked Jeff Thamkittikasem, chair of the task force, if the city could provide a list of the systems in question. “We haven’t focused on reviewing any specific systems,” Thamkittikasem responded.
NYC’s clouded and frustrating process reflects broader tensions emerging as corporations and governments tackle the tough task of devising practical AI ethics guidance. The NYC hearing came in the wake of backlash against Google’s failed attempt at forming an AI ethics board. The company was criticized not only for its board member choices but for establishing a group with little if any actual authority over company decisions. Within days of forming the group, hundreds of employees and ethics advocates called for the removal of a controversial member, which led Google to shutter the initiative altogether.
Whatever comes out of the NYC Automated Decision Systems (ADS) task force, it appears its recommendations will be advisory rather than legally binding in nature. However, at this late stage, the task force appears to be bogged down in preliminary discussions.
“To be clear, the ADS task force is not going to produce a list of algorithms in use by the city,” Thamkittikasem said during the hearing, explaining that the plan instead is to develop and issue recommended criteria that would allow agencies to do city-wide assessments. Even defining what constitutes an ADS has taken longer than expected, he added.
The process leading to the establishment of the task force itself appears to have been contentious. The December 2017 law which created the task force originated as a bill requiring agencies using automated systems to make source code available to the public and simulate how the systems would operate in real-world scenarios. Though such a requirement mirrored general demands for algorithmic transparency from throughout the AI ethics community, it illuminated the tensions between that goal and corporate intellectual property and government security concerns.
A Call for City Agency Context
During the hearing, some of those requesting a list of AI technologies used by the city did acknowledge privacy and security considerations that would need to be addressed before making such a list available. Still, proponents of increased system transparency suggested that information about which technologies are being used, and for what purposes, would provide the context necessary to craft meaningful advice for ensuring that the systems are developed and employed in a fair manner.
Janet Haven, Data and Society Executive Director and a task force observer, argued that without specific information about which systems are in use, for what purposes and by which agencies, the initiative would have limited impact. “Social context is essential to defining fair and just outcomes. This city is understood to be using ADS in such diverse contexts as housing, education, child services and criminal justice. The very idea of a fair or just outcome is impossible to define or debate without reference to the social context of the system,” she said.
“The very idea of a fair or just outcome is impossible to define or debate without reference to the social context of the system.”
– Janet Haven, Data and Society
After completing his prepared testimony, Barocas stressed that even some basic information would help. “At a minimum it would be very helpful to have even basic information about relevant systems. I understand the challenge here about settling on a definition and the challenge of figuring out the scope, but the lack of any examples at all or even identifying — not specific details — but the mere existence of relevant systems has really impeded any meaningful conversation about these kinds of systems.”
The task force is due to deliver a report by December 2019 and plans to begin community engagement efforts. It remains to be seen whether the report will provide recommendations that directly address the use of specific systems currently employed by New York, or will merely offer generalized guidance.