When Ben Green got his start as a data scientist working for cities including Boston and Memphis, the emerging smart city space was full of promise. He wasn’t quite prepared for the challenges he’d face, and not just data hiccups or apps that didn’t do what they were supposed to do. Green gradually came to realize that the so-called smart city revolution — with all the hype around data analysis for optimized government services, utopian city mobility and tech-streamlined citizen engagement — wasn’t all it was cracked up to be.
“Whether we recognize it or not, the technologies we implement in cities today will play a significant role in defining the social contract of the next century,” writes Green in his new book, The Smart Enough City: Putting Technology in Its Place to Reclaim Our Urban Future.
The book offers thoughtful insight and analysis of the issues surrounding smart city technologies, illuminated through recent real-world examples of municipal tech gone awry and municipal tech that gets it right. In this way, it is both prescient and timely, capturing a pivotal moment in democratic society during which the technology-related decisions made today, whether by a small hometown or a sprawling metropolis, will affect us for decades or centuries to come.
You may not realize it, but chances are your city government has implemented emerging tech such as traffic signal sensors, optimized water safety systems or even predictive policing platforms.
RedTail’s Kate Kaye chatted with Green about a few of the important threads stitched throughout his book, and some recent city tech news.
RedTail: First, let’s talk about something newsworthy. About a year ago, the New York City Mayor’s Office announced its Automated Decision Systems Task Force. The goal is to develop a process to ensure automated decision systems used by the city are equitable, fair and transparent. Last week, members of the task force said they still have not been provided even basic information about AI used by city agencies.
So, when it was introduced, this task force was praised as this innovative way that cities can start incorporating ethical considerations when they’re deciding whether or not to use certain technologies or how to implement them. Now, a year later people on the task force are saying, “Um, hello, this is not working!” This whole thing’s about transparency and the city won’t even be transparent about the technology it uses. What do you think about it?
Ben Green: If the city or various departments don’t want to give out this information, they don’t have to. This is often a challenge with these transparency initiatives: people in government view transparency as at minimum a hassle and at worst a threat.
We saw this with the open data movement where it took many years for open data to take hold and be recognized as something that should be done, and often you had city departments who were very wary of releasing that type of information because they were worried about the external scrutiny.
People in government view transparency as at minimum a hassle and at worst a threat.
That is a major barrier to what’s going on. Ethics is about creating conditions for ethical outcomes. And so here, just as with [Google’s ethics board], it’s not just about having a task force; it’s about actually building the collective public power to ensure that transparency is achieved.
RT: In part, the promise of smart city technology is that data collection and analysis will help governments understand the needs of citizens and be able to respond more appropriately to them. Are cities considering data management and ownership questions when they sign contracts with tech vendors?
BG: I think they’re starting to now. They definitely are worrying about these types of things. In the past they often weren’t, and that’s where there was a disconnect: cities would work with tech vendors, get really excited about the types of services and products they were producing, and not actually recognize what was going on under the hood in terms of data collection, and often the ability to create significant profit from that data.
And it’s the same story with open data generally. It was around 2015/2016 that cities started to realize that all of the data they were releasing as open data came with significant privacy risks.
There was this huge push. There was this idea that working with data in cities meant doing open data, and then there were cases of data that were released and that ended up including a lot of sensitive information either directly or via analysis and inference. That was really the shift that I think started getting cities working on addressing and really thinking deeply about…the types of privacy issues at stake that go beyond your very traditional, simple [understanding that] if it includes your name, SSN, it’s sensitive and if not, it’s totally anonymous data.
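The risk Green describes — that data can be sensitive “via analysis and inference” even after names and Social Security numbers are stripped — can be illustrated with a small sketch. The data below is entirely hypothetical; the point is the well-documented re-identification pattern of joining a “de-identified” release to an outside dataset on quasi-identifiers such as ZIP code, birth date and sex, a combination that is frequently unique to one person:

```python
# Hypothetical illustration: re-identifying "anonymous" records by joining
# on quasi-identifiers (ZIP code, birth date, sex). All data is made up.

# A city releases trip records with names removed.
released = [
    {"zip": "02139", "birth": "1984-07-02", "sex": "F", "trip": "clinic"},
    {"zip": "02139", "birth": "1990-01-15", "sex": "M", "trip": "court"},
]

# An outside dataset (e.g. a public voter roll) pairs the same
# quasi-identifier fields with names.
voter_roll = [
    {"name": "A. Jones", "zip": "02139", "birth": "1984-07-02", "sex": "F"},
    {"name": "B. Smith", "zip": "02139", "birth": "1990-01-15", "sex": "M"},
]

def reidentify(released_records, roll):
    """Link released records to names via the quasi-identifier triple."""
    index = {(p["zip"], p["birth"], p["sex"]): p["name"] for p in roll}
    return [
        (index.get((r["zip"], r["birth"], r["sex"])), r["trip"])
        for r in released_records
    ]

# Every "anonymous" record now carries a name and a sensitive attribute.
matches = reidentify(released, voter_roll)
print(matches)  # [('A. Jones', 'clinic'), ('B. Smith', 'court')]
```

This is the simplest form of the attack; inference from patterns (such as repeated trip origins revealing a home address) makes the problem harder still, which is why “we removed the names” is not a meaningful privacy guarantee.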
RT: It’s not clear to me that a lot of people in government are at that stage of nuanced understanding about data privacy yet.
BG: It’s tough. There’s a long process of that information trickling down to a broader understanding in part because it is kind of complex. If you don’t have people who are used to working with this type of data it can seem anonymous or it can seem like you just have to do these one or two things and then it’s anonymous.
[Google-affiliated] Sidewalk Labs in particular is a company that I’ve seen really peddle the language around anonymous data. I’m sure they are involved in drawing these lines for city officials, telling the story of where these lines are, and I would argue that they are defining anonymous information far too broadly, in a way that clearly goes against all of the understanding and science on this topic, but that allows them to use a lot more of your data.
There’s a lot of education that still needs to happen for city officials around these types of things. They should be wary of what types of data they are collecting and storing, and wary of what types of data they are allowing tech companies to collect and store. While there are a number of cities that are pretty thoughtful about this and do have internal and external expertise, you’re right, there are a lot of cities that are still figuring it out, including how to engage with a tech company like Google. It can be tough to have a technical debate with engineers from Google. They come in with really fancy software and talk about how thoughtful they’ve been with anonymization, and it requires a pretty advanced understanding of the topic to see the flaws and push back on that type of thing.
It can be tough to have a technical debate with engineers from Google.
RT: Despite the claims of some cities that they engage their citizens when considering new technologies that affect policy, I’m not convinced this really happens in a meaningful way all the time, even with the best intentions. Are public meetings really enough, for instance? How are discussions around community engagement evolving? Everyday people affected by emerging technologies aren’t always totally engaged in these issues.
BG: I think there are cities that have been making efforts on this. Seattle, as part of its surveillance oversight ordinance, has in just the last couple of months started the process of holding public meetings about surveillance technology. Chicago has done a lot of work on this too. One of the things Chicago has talked about for its Array of Things project is just how much work you need to do to educate people about what’s even going on: the basics of the technology, the basics of the project and how it works.
The issues with public meetings about technology are the same issues public meetings have always had. They’re exacerbated in a space where there are significant gaps between the expertise people have and the expertise needed to really understand and have an informed debate about this technology.
The advantage though is that there are a lot of folks in these public meetings — a lot of people in the tech industry, a lot of people who are academics, advocates for privacy and civil liberties who are able to come in and provide a lot of expertise.
Ultimately, what’s needed is ensuring that the political and policy processes on the back end in city governments are robust against lobbying and regulatory capture, so that they are representing the public interest.
Ultimately, what’s needed is…ensuring that the political and policy processes on the back end in city governments are themselves more robust against lobbying and regulatory capture so that they are representing the public interest; making sure that they’re not just going along with the tech companies…but genuinely have the ability and resources to make these decisions in the public interest without simply worrying about where the funding can come from; without making deals with tech companies simply because they’re the only ones with the resources to provide these types of services; and, more broadly, shifting the narrative away from smart utopianism.
RT: I was really interested to read your discussion of tech closure in the book. This is the process by which we come to a consensus about certain technologies, though not necessarily because the right solution has been found. Rather, tech closure happens when a social group perceives a problem as solved, possibly obscuring alternatives.
Are we at a point in our culture that this tendency toward tech closure is even more profound? It seems like in a lot of ways people are trained to believe that any new technology must be better than what came before it. We call it “innovation” because that connotes improvement. It’s what drives people to buy the latest iPhone. I’m always laughing at these TV ads for cars that just talk about “tech.” This new Acura or BMW, or whatever, has awesome new “tech” and it’s as if that’s the feature, as if cars haven’t been built with technology all along. It seems like the closure in this case is that technology is good no matter what.
BG: We should care about innovation; innovation is about finding creative ways to address problems. The danger is that innovation has come to mean simply applying new technology. And tech closure is typically used in a slightly narrower way, where there are competing technologies…or competing forms of a technology, and the closure is on how that problem ends up being addressed: what form of the technology wins out and becomes taken for granted.
The danger with something like the smart city is the step before that: before we’re even thinking about which technology, there’s this idea that we need technology in the first place. Oftentimes there are cases in which technology can provide a reasonable or even the desirable solution, but it’s certainly not always the case. There’s a real danger of privileging the technological solution over every other alternative, which might be a policy reform, a structural reform, or any number of things that can often be much more wide-ranging and impactful.
There’s a real danger of privileging the technological solution over every other alternative.
The historical case [in which I write about closure in the book] is thinking about cars. This idea that, we have transportation issues, and what we need is just to have more roads. If we have more infrastructure for cars to drive on, then we will have a better transportation system.
Today, we have the danger of a similar sort of closure around technology generally with self-driving cars. There’s a really nice parallel to what happened 80 or 100 years ago: the idea that what we need now is our existing infrastructure with this new technology on top of it.
What a lot of cities have been moving toward, and what is actually needed, is a broader reform in urban development and how we’re thinking about public spaces, in a way that moves away from a car-centric model entirely. The danger of this innovation-as-technology mindset is that we become fixated on the technological intervention, which often distracts from or simply takes over any discussion of alternative reforms, and all we think about is the tech solution.
RT: I wonder if your thinking on smart cities has evolved. Did you start out as more of a proponent of using technology and data analytics to improve city infrastructure and services, and if so, what was it that steered you to where you are today?
BG: It absolutely has. I came in as someone who was much more excited…I did my undergrad training in math and physics, and so I was really interested in the opportunities to use data to improve city government. So my initial forays into this space were really as a data scientist, working with city governments to see how we could use this technology to solve problems. I was definitely much more optimistic about that; I was more optimistic about the civic engagement apps and those sorts of things.
As I engaged with this stuff I increasingly saw the broader policy and structural issues that made data hard to use. In many cases the challenge was not the technology, that we needed more sophisticated algorithms or something like that. The problem was, “Oh, we have issues with data sharing,” or “Oh, we have issues with having really low quality data.”
I haven’t completely abandoned the idea that there’s value to this technology but have gained a much more sophisticated understanding of what the dangers are and what it takes for it to have a positive impact.
I became more interested in those sorts of policy questions about how do you actually use data effectively. It’s not a simple question of, “Well, we have these machine learning algorithms, so we can solve all of our problems.”
Because I was working directly with city governments, I saw how many challenges there are and how much work goes into actually making effective use of this data…. So, over time I absolutely became more skeptical, by seeing the disconnects between the ways technologies were being talked about, and by seeing a lot of the ways the technology was being used to cover up or obscure broader political debates, issues or disagreements.
I haven’t completely abandoned the idea that there’s value to this technology but have gained, I think, a much more sophisticated understanding of what the dangers are, where it can go wrong and what it actually takes for it to have a positive impact.