Meet Chris Meserole
Chris Meserole is the Executive Director of the Frontier Model Forum and an expert on AI safety, international governance, and global cooperation. He is currently focused on the development of standards and best practices for AI safety and evaluation, particularly for advanced models whose capability profile remains unknown.
Prior to the Frontier Model Forum, Chris served as Director of the AI and Emerging Technology Initiative at the Brookings Institution and as a fellow in its Foreign Policy program. Established in 2018, the Initiative has aimed to advance the responsible governance of AI by supporting a wide range of high-profile efforts across Brookings, from research on the impact of AI on bias and discrimination to its implications for global inequality and democratic legitimacy.
In his own research, Meserole has focused extensively on safeguarding large-scale AI systems against the risks of accidental or malicious use. Among other efforts, he has co-led the first global multi-stakeholder working group on recommendation algorithms and violent extremism for the Global Internet Forum to Counter Terrorism; published and testified on the challenges posed by AI-enabled surveillance and repression; and organized a US-China Track 2 dialogue on AI and national security, with a focus on AI safety and on testing and evaluation. As a member of the Christchurch Call Advisory Network, he also opened the session on algorithmic transparency at the 2022 Christchurch Call Leadership Summit chaired by President Macron and Prime Minister Ardern.
Meserole has a background in interpretable machine learning and computational social science. He has regularly advised high-level government, industry, and civil society leaders, and his research has appeared or been featured in the New Yorker, the New York Times, Foreign Affairs, Foreign Policy, Wired, and other publications.
“We’re excited to work together with other leading companies, sharing technical expertise to promote responsible AI innovation. We’re all going to need to work together to make sure AI benefits everyone.”
Kent Walker, President, Global Affairs, Google & Alphabet
“Companies creating AI technology have a responsibility to ensure that it is safe, secure, and remains under human control. This initiative is a vital step to bring the tech sector together in advancing AI responsibly and tackling the challenges so that it benefits all of humanity.”
Brad Smith, Vice Chair & President, Microsoft
“Advanced AI technologies have the potential to profoundly benefit society, and the ability to achieve this potential requires oversight and governance. It is vital that AI companies, especially those working on the most powerful models, align on common ground and advance thoughtful and adaptable safety practices to ensure powerful AI tools have the broadest benefit possible. This is urgent work and this forum is well-positioned to act quickly to advance the state of AI safety.”
Anna Makanju, Vice President of Global Affairs, OpenAI
“Anthropic believes that AI has the potential to fundamentally change how the world works. We are excited to collaborate with industry, civil society, government, and academia to promote safe and responsible development of the technology. The Frontier Model Forum will play a vital role in coordinating best practices and sharing research on frontier AI safety.”
Dario Amodei, CEO, Anthropic