What you’ll learn in this article:
- Ahead of the curve, a British Columbia safety inspection organization has implemented ethical principles during development of new machine learning technology.
- Technical Safety BC wants to make sure its human workers remain an essential part of the decision-making process.
- The organization is filling gaps in its safety-hazard data to ensure the system does not favor certain geographic areas, populations or powerful clients.
- Technical Safety BC is working with one of a nascent crop of tech consultancies launched to facilitate AI ethics-by-design and reduce unfairness in AI systems.
Cities have launched task forces, parliaments have produced extensive reports, and other government bodies have held hearings and convened expert groups. When it comes to devising ethical standards or assessing the impact of AI-based tools, these efforts reflect the current state of AI ethics in government: a lot of talk, but not much action.
In British Columbia, Canada, however, one independent organization mandated by the government there — Technical Safety BC — has begun implementing ways to ensure that the AI technology it uses is fair and accountable. And, nearly two years after launching the effort, it has moved into its second phase.
Picture 250 hard hat-clad safety officers inspecting electrical, gas and refrigeration installations and upgrades at large commercial construction sites and small mom-and-pop establishments throughout the Western Canadian province. Early last year, Technical Safety BC began the process of developing an AI-based system to augment safety officers’ decisions regarding which building sites to inspect or how to score the hazard level of a permit site.
Bringing in new machine learning technology wasn’t particularly novel. What was innovative was an approach that incorporated ethical principles during the design phase. Technical Safety BC, an independent permitting and licensing entity mandated by the Province of British Columbia in 2004, had help in its mission from a new kind of tech consultancy, too: Generation R, a tiny firm sprung from a University of British Columbia project evaluating the social and ethical implications of AI and robotics.
Technical Safety BC’s ethics-by-design approach would help ensure that values such as transparency and respect for human users and their decisions were built into the development of the system. As society grapples with the likelihood that AI will kill some jobs, it will become increasingly important for governments and corporations to ensure employees are not alienated or threatened when new AI-based technologies and tools are introduced in the workplace.
When Gen R began its work with Technical Safety BC’s executive and data science team at the start of 2017, they set out to understand the values of stakeholders including safety officers, business owners and clients, and the community served by the organization. The AI ethics firm helped determine ways to account for false negatives, or times when the system fails to spot a potential hazard, and where blame should be placed in such cases.
Today, Technical Safety BC inspectors carry digital tablets loaded with the new machine learning software that informs their decisions by predicting the probability of hazards such as loose wiring, electrocution or fire at permit sites. It’s the machine learning algorithm powering this risk-evaluation system that Gen R’s ethics work helped guide.
In the past, safety officers made decisions the old-fashioned way: through a combination of experience, knowledge, expertise and sometimes plain ol’ gut instinct. Perhaps construction sites in certain neighborhoods got more attention while those in other areas were ignored, or some neighborhoods attracted consistent patterns of high-risk assessments.
More Comprehensive Data for Fairer Inspections
Because there are not enough inspectors to evaluate every work site, the organization sought a smarter means of determining which locations warranted inspection. For instance, while assessing high-risk work sites is important, a sole focus on high-risk areas ultimately could produce a biased data set, suggested Soyean Kim, leader of research and analytics at Technical Safety BC. Training the system on such a narrowly selected slice of information, she said, could prevent it from pinpointing other, less obvious areas of risk.
To help address this problem, Technical Safety BC aims to build larger data pools representing otherwise under-represented inspection areas. “That’s something we’re actively monitoring in an automated fashion,” said Kim. She explained that the organization is filling in data gaps through random sampling across a spectrum of permit types in order to highlight clusters that could warrant more attention from safety officers.
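The article does not describe how that sampling is carried out; below is a minimal sketch of the general idea, assuming a hypothetical pandas DataFrame of permits with a “permit_type” column. The data, column names and sample sizes are illustrative only.

```python
import pandas as pd

def sample_for_coverage(permits: pd.DataFrame, per_type: int = 50, seed: int = 42) -> pd.DataFrame:
    """Draw a random sample from every permit type so that under-inspected
    categories still contribute data, rather than only high-risk sites."""
    return (
        permits.groupby("permit_type", group_keys=False)
        .apply(lambda g: g.sample(n=min(per_type, len(g)), random_state=seed))
    )

# Hypothetical usage: permits = pd.read_csv("permits.csv")
# inspection_candidates = sample_for_coverage(permits, per_type=25)
```

The point of a scheme like this is simply that small permit holders, who may have only one or two permits, still appear in the training data alongside large clients with thousands.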
The end goal is to generate additional information reflecting sites in ignored areas or belonging to minority populations – say, a laundromat or restaurant in a working class, ethnic neighborhood with just one or two permits. Until now, bigger clients with more prominent projects and thousands of permits – for example, Whistler Blackcomb ski resort — may have gotten more attention. Yet, as it aims to limit neglect of certain geographic areas or permit holder groups, Technical Safety BC also must account for the possibility of biased risk scoring that leads to unfairly high hazard patterns for minority populations.
“At this stage we know for sure the data is not perfect,” Kim said.
Respecting the Autonomy of the Working Person
We often think of artificial intelligence as fully automated, machine-only processes; that perception inspires AI ethics discussions centered on robot takeovers, fully driverless vehicles or the singularity. In reality, much of the AI-based tech used today and in the future will involve human decisions that are supplemented by machine learning and predictive algorithms, rather than directed by them. With this in mind, organizations like Technical Safety BC aim to make sure humans remain an essential part of the deciding process.
“We have to give some cushion and provide opportunities for safety officers to exercise their own discretion,” said Kim.
Shalaleh Rismani, chief innovation officer and system analyst at Gen R, explained that the safety organization is concerned about machine learning-based directives undermining the knowledge and autonomy of safety officers. The challenge is to balance that concern with the mission to leverage data and AI to improve the efficacy of inspection services.
Human Collaboration through Interpretation
Hand-in-hand with respect for human autonomy comes the need for explainable AI. Put simply, the system spitting out hazard predictions and risk scores onto a safety officer’s tablet should be able to illuminate how it came to those decisions. The Technical Safety BC data team has employed Local Interpretable Model-agnostic Explanations, or LIME, to generate those explanations. The method exposes how much weight the system gives to specific variables, said Kim.
“It’s how we help interpretations,” she said, noting that while “it doesn’t give 100% interpretability,” the LIME approach gives insights into how the system determines a risk score.
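The story names LIME but not how Technical Safety BC wires it in. As a rough illustration only, here is a minimal sketch using the open-source lime package with a scikit-learn classifier; the feature names, placeholder data and model are assumptions, not the organization’s actual system.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Hypothetical training data: each row is a permit site, each column a feature
# such as permit age or past incident count (placeholder values for illustration).
feature_names = ["permit_age_days", "past_incidents", "installation_type_code"]
X_train = np.random.rand(500, 3)
y_train = np.random.randint(0, 2, 500)   # 1 = hazard found, 0 = no hazard

model = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train,
    feature_names=feature_names,
    class_names=["no_hazard", "hazard"],
    mode="classification",
)

# Explain a single prediction: which features pushed this site's risk score up or down?
explanation = explainer.explain_instance(X_train[0], model.predict_proba, num_features=3)
print(explanation.as_list())   # e.g. [("past_incidents > 0.7", 0.12), ...]
```

As the quote suggests, output like this does not give total interpretability; it approximates the model locally around one prediction, which is enough to show a safety officer which variables mattered most for that particular risk score.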
(Achieving algorithmic transparency and explainable AI will become increasingly difficult as systems grow in complexity. Learn more in RedTail’s feature story about this topic.)
Accounting for human autonomy and transparency were two specific recommendations included in Gen R’s assessment of Technical Safety BC’s program, presented in July 2017. Among the other recommendations: determine acceptable levels of machine autonomy, implement metrics to gauge transparency and safety officer trust in the system, and communicate the limits of the system and the importance of data quality and diversity.
Now, in its second phase of implementing these initial suggestions, Technical Safety BC has begun to incorporate ethical practices in other arenas, including the use of machine learning for revenue projections and budgeting, and in conjunction with installation of data-capturing sensors in elevators in select city buildings. These efforts could result in considering additional stakeholders who could be affected by the technologies, said Kim, adding, “It’s ongoing work.”
The Burgeoning Business of Algorithm Assessment
Rismani was just finishing her graduate work at the University of British Columbia in 2017 when she and the small team that would form Gen R spotted a need for a service that helps businesses ensure ethical AI standards are incorporated from the early stages of AI use.
“At the time that we started there wasn’t a lot of buzz around this topic,” said Rismani.
The AI Ethics buzz has become more pronounced over the past year, and Gen R represents an emerging sector of companies or services created to facilitate development of AI technologies that are fair, transparent, explainable and accountable to multiple stakeholders.
Like Gen R, ORCAA, a firm launched by mathematician Cathy O’Neil, author of Weapons of Math Destruction, takes a custom approach to assessing the quality of an algorithm and its potential impact on multiple stakeholders. IBM and Accenture, meanwhile, have introduced automated tools they claim remove bias from AI systems.
(Read RedTail’s Q&A with O’Neil to learn about ORCAA’s approach to algorithm assessment and why she is unimpressed by automated fairness tools.)
While Gen R is not the only fledgling enterprise created to facilitate AI ethics-by-design, the market, particularly smaller firms and startups, may not be quite ready to work with these sorts of services, suggested Rismani. Large companies, for example, may have internal AI ethics programs underway that don’t require outside tech ethics consulting.
So, while Gen R waits for potential clients to catch up, Rismani and the team are laying groundwork by giving talks and presentations about designing for ethical AI. There is momentum, though, she said, noting that a year ago people didn’t even recognize the term AI Ethics. For now, “being present and active and talking” is key to fostering future interest, she said. “We’re still trying to figure out what the market looks like.”
Correction: Please note this story was altered after it was originally published to clarify the fact that Technical Safety BC is not a government-created agency, but rather an independent, non-government body that was mandated by British Columbia’s provincial government.