AI Safety Fund initiates first round of research grants



How the AI Safety Fund will advance the field of frontier model safety research

At the FMF, we believe that fostering safe AI development and deployment requires cultivating a vibrant research community. That’s why we’re supporting technical research to improve AI safety and enable independent, standardized evaluations of frontier AI capabilities and risks. 

Read more about the AI Safety Fund in the update from the fund’s independent administrator, the Meridian Institute, below.

About the AI Safety Fund

The AI Safety Fund (AISF) is a $10 million+ initiative, born from a collaborative vision of leading AI developers and philanthropic partners. Funders include the Frontier Model Forum’s founding members Anthropic, Google, Microsoft, and OpenAI, with support from philanthropic partners, including the Patrick J. McGovern Foundation, the David and Lucile Packard Foundation, Schmidt Sciences, and Jaan Tallinn.  

Administered independently by the Meridian Institute, the AISF awards research grants to independent researchers to address some of the most critical safety risks associated with the widespread use of frontier AI systems. 

The purpose of the fund is to support and expand the field of AI safety research to promote the responsible development of frontier models, minimize risks, and enable independent, standardized evaluations of capabilities and safety. We seek to attract and support the brightest minds across the AI ecosystem to build frontier models aligned with human values. 

The AI Safety Fund supports research on state-of-the-art, general-purpose AI models. Funding will be awarded only to projects researching deployed versions of those models. 

Funding Opportunities and Research Priorities 

The Artificial Intelligence Safety Fund (AISF) is pleased to announce that the first round of research grants is being awarded to a diverse group of researchers investigating frontier AI safety. This initial round will directly support research aligned with the principal objective of the AISF: to support and expand the AI safety research community to enable independent, standardized evaluations of frontier AI capabilities and risks, and to promote the responsible development of frontier AI models. 

The initial grants will be awarded to solicited grantees researching new methods for assessing the capabilities and risks of frontier models, including evaluations, red-teaming, and benchmarking. Grantee awards will be publicly announced on the AISF website in July. Research outcomes will be publicly available on the website, and additional opportunities to share work funded by the AISF will be considered with guidance from an Advisory Committee. 

While AI has tremendous potential to benefit the common good, appropriate testing, evaluation, and best practices are required to mitigate risks. As AI is increasingly applied in high-stakes situations, ensuring safety is crucial to avoid negative outcomes and build public trust. From evaluating dangerous capabilities to ensuring alignment with human values, independent research is a critical element in ensuring frontier AI is developed and deployed safely. 

In future open-call grant rounds, the AISF will prioritize technical research in three core areas related to frontier AI models: 

  • Identifying safety-critical risks posed by frontier models
  • Evaluating and assessing strategies for addressing such risks 
  • Implementing mitigations to prevent these risks from occurring

The AISF is committed to supporting research from across the diverse field of AI safety, and is eager for input on the intended research agenda. Researchers working on frontier AI safety from across disciplines who would like to share insights or inquire about how to participate should complete the research interest form.

For more information, please visit