Year in Review: Building a Safer Future Together
2023 has been an extraordinary year for AI. As chatbots surged in popularity and generative AI went mainstream, the promise and potential of AI gained worldwide attention. Yet at the same time, the powerful new capabilities of frontier AI systems also raised concerns about risks. It is important that we act now to ensure models are built safely and responsibly. 

The Frontier Model Forum’s mission is to do just that: advance the safe development and deployment of frontier AI systems. Since I joined the FMF as Executive Director in October, we’ve been busy convening industry leaders, meeting with top voices from civil society and academia, assembling a team, and making plans to hit the ground running in 2024. 

Recap of the FMF’s first few months

A bit more about what the Frontier Model Forum has been up to since launching in July: 

  • Building a strong foundation: We are establishing workstreams with our four founding member companies – Anthropic, Google DeepMind, Microsoft, and OpenAI – to develop shared best practices and standards for the safe development of frontier AI models. This effort will help inform a collaborative framework for responsible innovation and ensure we’re able to understand and mitigate potential risks.
  • Investing in solutions: We proudly announced an initial $10 million investment in the AI Safety Fund, which will support independent research to explore the development of risk assessments, evaluations, and mitigation techniques that can help raise safety and security standards across the industry. We’ll be sharing more details about the application process and funding criteria early in the new year.
  • Sharing knowledge and expertise: We met with leaders from across the AI ecosystem to discuss industry best practices and governance frameworks. Collaboration is core to the FMF’s mission, and we will continue pushing for alignment on safety best practices across industry, academia, and the policy community.

What’s next 

As we step into 2024, the FMF is focused on several key initiatives:

  • AI Safety Fund in action: We’ll issue a formal call for proposals early in 2024, inviting researchers to apply for grants that will advance the science of AI safety. 
  • Advisory Board: We’ll be announcing the FMF’s Advisory Board, composed of leaders with diverse expertise who will help shape the FMF’s future work and research. 
  • Empowering through knowledge: We’ll be publishing white papers on critical industry topics and themes of AI safety, building on our initial release on red teaming approaches. These resources will equip frontier developers and policy audiences alike with the knowledge to responsibly navigate the complex landscape of frontier AI.
  • Onboarding new members: We’ll also be opening to new members in the new year, and we aim to welcome frontier AI firms that share our commitment to AI safety.

Stay in touch

Want to hear more from the Frontier Model Forum? We’ll be launching a newsletter to keep you informed about our work and plans in the coming months – sign up here. You can also follow us on X and LinkedIn to hear more from the FMF and join the conversation.

Thanks for reading,

Chris Meserole