Our mission

The Frontier Model Forum is an industry non-profit dedicated to advancing the safe development and deployment of frontier AI systems. 

The Forum aims to: 

  1. Identify best practices and support standards development for frontier AI safety.
  2. Advance independent research and science of frontier AI safety.
  3. Facilitate information sharing about frontier AI safety among government, academia, civil society, and industry. 

By drawing on the technical and operational expertise of its member firms, the Forum seeks to ensure the safety of the most advanced AI models so that they can meet society’s most pressing needs.

AI has the potential to benefit society in transformational ways. And as with any major technological shift, we must also be prepared for the risks that come with the benefits. 

Our focus

The FMF is focused on addressing the risks to public safety and critical infrastructure posed by advanced AI systems. We aim to advance best practices for assessing and mitigating those risks while also advancing the science and research of AI safety. 

We are committed to working with stakeholders across the global AI safety ecosystem to find collaborative solutions that will make AI systems safe and responsible. 

Our scope

“Frontier AI” refers broadly to the general-purpose AI models that constitute the state of the art, a set that will shift over time as the field progresses. For purposes related to its membership, the FMF defines a “frontier AI model” as a general-purpose model that outperforms, on a range of conventional performance benchmarks or high-risk capability assessments, all other models that have been widely deployed for at least 12 months.

The FMF’s definition is intentionally more expansive than “current state of the art” models alone, in order to include a wider set of stakeholders who are in a position to contribute to the FMF’s work. 

[Figure: Stylized example of how the capability frontier shifts over time.]

Our governance

The Frontier Model Forum is an industry-supported non-profit 501(c)(6), established in 2023. In line with its mission, the Forum provides public benefits related to AI safety and does not engage in lobbying.

The Forum is led by its Executive Director, Chris Meserole, and overseen by an operating board made up of representatives from its member organizations. The work of the Forum will also be informed by a global advisory board composed of leaders from across the AI ecosystem, including academia and civil society. 

The FMF is funded by member fees.