Kate Kaye’s AI and Data Reporting in Protocol

I was the senior AI and data reporter at Protocol, which abruptly shut down in mid-November 2022. I have always approached tech reporting through the lens of its effects on people and society, and Protocol was home to some of the most meaningful work of my 20-plus-year journalism career.

Not only did my reporting get attention (it prompted the American Civil Liberties Union, the Electronic Privacy Information Center and Fight for the Future to demand that Zoom end its plans to use controversial emotion AI), it also exposed the limits of marketing-driven tech-ethics platitudes and questioned the misconceptions and misaligned incentives behind the so-called “AI race” narrative driving U.S. policy related to AI and China tech.

Below are links to some of my best reporting published in the pages of Protocol in 2021 and 2022.

A groundbreaking series
While reporting my multi-part series analyzing the so-called “AI race” between the U.S. and China, I embarked on what my editor Tom Krazit called “one of the most ambitious editorial projects we’ve attempted in the short history of this company.” The “AI race” concept underpins some of the most important government policies and trade sanctions affecting foreign relations between two of the world’s largest economic powers and drives media coverage of national security, AI, data, chips and other tech. My series — produced with the help of an amazing team of people at Protocol — added much-needed nuance to a conversation plagued by hype, assumption and a lack of introspection.

Original reporting that sparks advocacy
Human rights groups to Zoom: Stop any emotion AI plans

Deeply reported profiles of people and companies affecting AI and data
Why Eric Schmidt became an AI cold war hype master
Kai-Fu Lee tried to teach the US about Chinese AI and got a rivalry
How Microsoft helped build AI in China
How 50-year-old data giant Acxiom learned to accept the cloud
Google Cloud’s AI head weighs academic ideals versus corporate reality

Tech ethics reporting that questions platitudes
Not my job: AI researchers building surveillance tech and deepfakes resist ethical concerns
For people with disabilities, AI can only go so far to make the web more accessible

Balanced, explanatory analysis of emerging AI software and products
Doctors turn to imperfect AI to spend more quality time with patients
Intel thinks its AI knows what students think and feel in class
Companies are using AI to monitor your mood during sales calls
AI builds tomorrow’s major leaguers with few biometric data controls
Why low-code and no-code AI tools pose new risks
Why AI fairness tools might actually cause more problems

Illuminating new industry sectors and underreported industry trends
Why AI and machine learning are drifting away from the cloud
With Delta Lake, Databricks sparks an open-source nerd war and customer confusion
These companies make fake data that builds AI
Despite the questionable optics, AI startups want military contracts
These startups want to help prove responsible AI can be profitable

Thoughtful, non-reactionary analysis that advances the news
SafeGraph CEO: ‘It’s good that we were called out’ over abortion data
When SafeGraph pulled abortion clinic data, it stranded researchers
The Roe decision could change how advertisers use location data
How state abortion politics forced Proov off AWS and onto Google Cloud
Why the FTC put data broker Kochava in the spotlight
Why Cerner could give Oracle a $28.3 billion headache

Smart analysis of policy and government moves affecting tech and people
The FTC’s ‘profoundly vague’ mission to kill algorithms will be messy
Inside Google’s plan for a $500M-a-year national AI research cloud
Controversial driver-monitoring AI companies just got a boost from DC
Startups are likely to get access to the national AI research cloud
AWS, Microsoft warned of China’s AI threat while growing AI hubs there