The winds of regulatory oversight for artificial intelligence are blowing in the U.S. and Europe. The European Commission signed off on its Ethics Guidelines for Trustworthy AI earlier this month, the culmination of several months of deliberations by a select group of “high-level experts” drawn from industry, academia, research and government circles. In the advisory realm, the EU guidance joins forthcoming draft guidance on AI from a global body, the Organization for Economic Cooperation and Development.
Meanwhile, U.S. federal lawmakers want something on the books. A new bill proposed by Sen. Ron Wyden, D-Ore., Sen. Cory Booker, D-N.J., and Rep. Yvette D. Clarke, D-N.Y., would require large corporations to conduct automated decision system and data protection impact assessments of their algorithmic systems. And, in February, U.S. representatives proposed their own guidelines for ethical AI in a House resolution.
The EU guidelines emphasize “lawful” AI, an apparently significant shift from an earlier draft. Why? For IAPP’s Privacy Tech, RedTail’s Kate Kaye asked sources — including one of those “high-level experts” — what “lawful” AI means for the future of EU regulation of AI.