Today we are announcing a new cohort of 11 grantees who have received more than $5 million through the AI Safety Fund (AISF). As frontier AI systems become more powerful and widely deployed, advancing our understanding of them and building robust safety tools are essential – which is why the AISF issued several requests for proposals late last year in Biosecurity and Cybersecurity, as well as AI Agent Evaluation and Synthetic Content.
Spanning diverse approaches to frontier AI safety and security, the funded projects include:
- Apollo Research, Building Black Box Scheming Monitors for Frontier AI Agents
- California Institute of Technology, AI-driven Detection of Protein Mimetic Biothreats with BioSentinel
- Institute for Decentralized AI (part of Cosmos Institute), Scalable, Decentralized Oversight for Multi-Agent Networks
- Faculty AI, Automated Red-Teaming for Biosecurity Risks
- FAR.AI, Quantifying the Safety-Adversary Gap in Large Language Models
- FutureHouse, Inc., Pioneering AI-Driven Experimental Design: Benchmarks for Responsible Innovation
- Morgan State University, Evaluating AI-Assisted Cybersecurity Operations: Comparative Analysis of Human Performance with and without AI Tools
- Nemesys Insights LLC, ICS Benchmark and Human Uplift Study
- SecureBio, Evaluations to Assess Agent AIs’ Execution of Tasks That Could Enable Large-scale Harm
- University of Illinois Urbana-Champaign, Cybersecurity Risk Evaluations of AI Agents with Computer Interaction Capabilities
- University of Toronto, Analyzing the Emergent Role of Sanctioning in Regulating Multi-Agent LLM Systems
The projects were selected from a competitive pool of more than 100 proposals through a rigorous review process. As with the initial cohort of grantees, we are excited to support each of the AISF recipients and look forward to their scientific contributions and impact.
Update on the AI Safety Fund
With over $10 million in funding, the AISF was established in late 2023 as a collaborative initiative among leading AI developers and philanthropic partners. It aims to support and expand the field of AI safety research to promote the responsible development and deployment of frontier models, minimize risks, and enable independent, standardized evaluations of capabilities and safety.
The Meridian Institute initially managed and oversaw the fund. Support for the AISF came from the founding members of the Frontier Model Forum (FMF) – Anthropic, Google, Microsoft, and OpenAI – as well as philanthropic partners such as the Patrick J. McGovern Foundation, the David and Lucile Packard Foundation, Schmidt Sciences, and Jaan Tallinn.
After the Meridian Institute announced in June 2025 that it would be winding down its operations, the FMF began managing the fund directly. The remaining AISF funds will be used to support narrowly scoped research projects that target urgent bottlenecks to further progress in AI safety and security.
We look forward to continuing to advance the science of frontier AI safety and security.