Frontier Model Forum: Advancing frontier AI safety

The Frontier Model Forum draws on the technical and operational expertise of its member companies to benefit the entire AI ecosystem, advancing AI safety research and supporting efforts to develop AI applications that meet society’s most pressing needs.

Anthropic
Google
Microsoft
OpenAI

What the Frontier Model Forum does

Governments and industry agree that, while advanced AI offers tremendous promise to benefit the world, appropriate guardrails are required to mitigate risks. Important contributions to these efforts have already been made by the US and UK governments, the European Union, the OECD, the G7 (via the Hiroshima AI process), and others.

To build on these efforts, further work is needed on safety standards and evaluations to ensure frontier AI models are developed and deployed responsibly. The Frontier Model Forum (FMF) is one vehicle for cross-organizational discussion and action on AI safety and responsibility.


Amazon and Meta join the Frontier Model Forum to promote AI safety

We are excited to share that Amazon and Meta have joined the Frontier Model Forum to collaborate on AI safety alongside founding members Anthropic, Google, Microsoft and OpenAI.

Core objectives of the Forum

As pioneers in the AI landscape, the members of the Frontier Model Forum are committed to turning vision into action. We recognize the importance of safe and responsible AI development, and we’re here to make it happen.

Advancing AI safety research

Research will help promote the responsible development of frontier models, minimize risks, and enable independent, standardized evaluations of capabilities and safety.

Identifying best practices

Best practices for the responsible development and deployment of frontier models are essential, as is helping the public understand the nature, capabilities, limitations, and impact of the technology.

Collaborating across sectors

Policymakers, academics, civil society, and companies must work together and share knowledge about trust and safety risks.

Helping AI meet society’s greatest challenges

The Forum supports efforts to develop applications that address challenges such as climate change mitigation and adaptation, early cancer detection and prevention, and defense against cyber threats.

Join us in turning these objectives into reality as we shape the future of AI.