Technical Infrastructure for AI Oversight
From 13-19 July, many of the world’s leading researchers in machine learning and artificial intelligence will gather at the International Conference on Machine Learning (ICML) in Vancouver.
This year, Lisa Soder will be co-hosting a workshop at the conference focused on a key challenge that governments and regulators around the world are already facing, or will soon confront, as they work to ensure that the most powerful AI systems on the market are safe, lawful, and ethical: how to build the technical infrastructure needed for meaningful oversight.
Policies exist on paper, but both regulators and AI companies often lack the technical tools to implement them effectively. How do we actually monitor model capabilities? Enable reliable external audits? Detect problematic training data? Build systems that can halt dangerous AI activities?
This workshop session will bring together AI researchers and representatives from governance institutions, including the US Center for AI Standards and Innovation, the UK AI Security Institute, the EU AI Office, and the Canadian AI Safety Institute. If you’re interested in more detail on what will be discussed, or in the people who specialize in this field, check out our workshop website.
Please get in touch with Lisa Soder if you have any questions or want to meet in Vancouver!
Author
Lisa Soder
Senior Policy Researcher / Acting Head of Technical AI Governance