Our purpose

Governments and industry agree that, while advanced AI offers tremendous promise to benefit the world, appropriate guardrails are required to mitigate risks. Important contributions to these efforts have already been made by the US and UK governments, the European Union, the OECD, the G7 (via the Hiroshima AI Process), and others.

To build on these efforts, further work is needed on safety standards and evaluations to ensure frontier AI models are developed and deployed responsibly. The Forum will be one vehicle for cross-organizational discussions and actions on AI safety and responsibility.   

How the Forum will work

Over the coming months, the Frontier Model Forum will establish an Advisory Board to help guide its strategy and priorities, drawing from a diversity of backgrounds and perspectives.

The founding companies will also establish key institutional arrangements, including a charter, governance, and funding, with a working group and executive board to lead these efforts. We plan to consult with civil society and governments in the coming weeks on the Forum's work and on meaningful ways to collaborate.

The Frontier Model Forum welcomes the opportunity to support and inform existing government and multilateral initiatives, such as the G7 Hiroshima AI Process, the OECD's work on AI risks, standards, and social impact, and the US-EU Trade and Technology Council.

The Forum will also seek to build on the valuable work of existing industry, civil society, and research efforts across its workstreams. Initiatives such as the Partnership on AI and MLCommons make vital contributions across the broader AI community, and the Forum aims to support and engage with these and related multi-stakeholder efforts.