Your trusty RedTail journo hasn’t posted here in a while…true. But AI ethics and other AI reporting remains underway at my paying gig, Protocol, where I’m a senior reporter covering AI and data. Here are some of my stories that RedTail readers should check out:
Twitter’s AI ethics director is locked in a legal battle with the CEO of the startup she founded
Embroiled in a legal battle over leadership rights and shareholder voting power, Rumman Chowdhury has been accused of “strong-arm and intimidation tactics” by the CEO of Parity.
A surveillance AI firm with hidden ties to China is seeking US infrastructure contracts
Remark Holdings, whose AI-based surveillance technologies have deep but mostly hidden ties to China, already has a partnership with a Florida high-speed rail provider. Now the company wants to pursue additional US infrastructure projects.
For people with disabilities, AI can only go so far to make the web more accessible
AI and automated software that promise to make the web more accessible abound, but people with disabilities, and those who regularly test for digital accessibility problems, say such tools can only go so far.
Surveillance AI needs fake data to track people. These companies are supplying it.
Synthetic data suppliers promise that the fake data they provide can reduce bias in AI, but it also helps build controversial technologies used to monitor people’s behavior and interpret their emotions and body language.
Intel calls its AI that detects student emotions a teaching tool. Others call it ‘morally reprehensible.’
Virtual school software startup Classroom Technologies will test the controversial “emotion AI” technology.
Companies are using AI to monitor your mood during sales calls. Zoom might be next.
Software makers claim that AI can help sellers not only communicate better, but also detect the “emotional state” of a deal — and of the people they’re selling to.