New York City is hiring.
The city earlier this month unveiled a description of its new Algorithms Management Policy Officer role. But some worry that creating a procedural position forced to maneuver within an arguably flawed bureaucratic structure only perpetuates the city’s imperfect approach to developing policy for government AI use.
“It appears this role will simply provide a rubber stamp to current and future use of [Automated Decision Systems] without evaluating or even attempting to address known concerns with ADS currently used by city agencies,” Rashida Richardson, director of policy research at the AI Now Institute at NYU and a critic of the city’s task force, told RedTail.
“This role is unique in urban governance and is intended to help provide protocols and information about the systems and tools City agencies use to make decisions,” the city said in a statement.
The algorithm officer will report to the Mayor’s Office of Operations and will be responsible for establishing governing principles to guide city agencies’ ethical use of algorithmic technology. That includes developing a framework to assess such systems, considering “their complexity, the benefits, impact, and any potential risk of harm.”
This is what the city’s task force initiative was supposed to do in the first place when it convened AI ethics experts and city staffers two years ago to develop policy on algorithmic technologies. Instead, task force members were unable to obtain much detail about which automated systems city agencies already use. In fact, even agreeing on a definition of “Automated Decision System” proved a challenge.
The job description “suggests that the city sees issues of algorithmic discrimination as simply bureaucratic issues rather than consequences of problematic agency practices and policies.”
– Rashida Richardson, AI Now Institute at NYU
So what does an algorithm policy officer look like? According to the job description, this person has “outstanding, proven analytical skills” and a “thorough understanding of AI, data analysis, predictive analytics, and other related methods and practices.” The city said the officer will engage regularly with the public.
The position also calls for someone with a “demonstrated ability to develop and effectively implement policy guidelines to govern the use of systems across disparate entities.”
Few governments or corporations have put policies and procedures in place to guide their use of AI. There may not be many people with systems policy experience as it relates to algorithmic technologies, particularly when it comes to proving that policy guidelines have been effectively implemented.
The job description “further illustrates the city’s lack of understanding of the issue,” said Richardson. It “suggests that the city sees issues of algorithmic discrimination, bias, oversight, and accountability as simply bureaucratic issues that can be resolved by creating more procedures within government, rather than consequences of problematic agency practices and policies, in addition to larger societal problems.”
City governments can use algorithmic tech to determine how and where inmates are housed, to evaluate the risk of child neglect at welfare agencies, to place students in public schools, and even for seemingly mundane purposes such as prioritizing construction site inspections. As has been well reported, automated technologies sometimes produce unfair decisions because these systems are often built on already-biased historical data.
In the end, New York City’s automated decision systems task force report called for the creation of an “organizational structure” within city government to oversee policy guidance and the management of automated systems. Mayor Bill de Blasio made a companion announcement creating the new algorithm officer role, amid criticism of the city’s task force process and its overall lack of transparency.
The new position “is a centralized resource for agencies, helping provide information about the development, responsible use, and assessment of such tools for the purposes of addressing the risk of inadvertent harm that can accompany them,” noted the city’s statement.
While most problematic city government projects become distant memories or fodder for staffer gossip at the bar, this one prompted a thorough chronicling in the form of a shadow report endorsed by the ACLU, NAACP Legal Defense and Educational Fund, and more than 20 legal, human rights and tech justice groups. Richardson edited the report.
An entire task force of people was hamstrung, at least in part, by the reluctance of city agencies — i.e. “disparate entities” — to share information about how they already use technologies that can make life-altering decisions. Many eyes will be on New York City’s algorithm officer, watching to see whether this person will be able to navigate the bureaucratic pitfalls of city government to produce meaningful AI policy.