Research Associate

US or UK / Remote-friendly

We are looking for Research Associates to support our work at the forefront of AI safety. As a Research Associate, you will be responsible for assisting and supporting complex research programs in collaboration with our member firms and for aiding the development of consensus best practices for frontier AI safety. You will also provide research support for Forum leadership on select initiatives. 

As a Research Associate, you will likely assist our work on frontier AI capabilities evaluations, risk assessments, and mitigation measures across several domains, including but not limited to: 

  • Chemical, biological, radiological and nuclear (CBRN) threats
  • Advanced cybersecurity threats
  • Persuasion, deception, and malicious use threats

As a Research Associate, you will be adept at supporting senior staff and leadership, providing critical research, writing, and editorial assistance while also helping to drive forward complex collaborative projects. 

About the FMF

The Frontier Model Forum is an industry non-profit dedicated to the safe development and deployment of frontier AI models. By drawing on the technical and operational expertise of our members, we aim to (1) identify best practices for frontier AI safety and support the development of frontier AI safety standards, (2) advance AI safety research for frontier models, and (3) facilitate information sharing about frontier AI safety among government, academia, civil society, and industry. 

At the Frontier Model Forum (FMF), we value diversity of experience, knowledge, backgrounds and perspectives, harnessing these qualities to create extraordinary impact. We are committed to equal employment opportunities regardless of sex, race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, pregnancy, or related condition (including breastfeeding) or any other basis as protected by applicable law. If you have a disability or additional need that requires accommodation, please do not hesitate to let us know.  

Key responsibilities

  • Work with Research Science Leads and Forum leadership to implement a portfolio of timely research workshops and outputs
  • Help organize multiple working groups and research-oriented initiatives, delivering against a range of research objectives
  • Research and draft memos for use as structured read-aheads for working group meetings and for potential publication on the FMF’s website 
  • Conduct and write comprehensive literature reviews on various AI safety research topics
  • Draft and compile research surveys on various topics for circulation within expert networks

Additional responsibilities

  • Provide basic administrative support for meetings and opportunities for internal and external collaboration on research programs
  • Help to coordinate expert workshops, liaising with key experts and researchers from FMF firms and non-member organizations
  • Take part in informational expert interviews across member firms and external stakeholders, helping to memorialize areas of convergence and divergence among emerging safety practices
  • Monitor AI safety literature and research, staying abreast of key developments in capabilities evaluations, risk assessments, interpretability, and related topics

About You

You may be a good fit for Research Associate if you:

  • Have a clear passion for advancing frontier AI safety and actively seek opportunities to expand your domain knowledge
  • Have strong communication skills and an ability to develop constructive relationships with key partners and stakeholders
  • Thrive at organizing, facilitating, and synthesizing expert workshops and research convenings 
  • Know how to draft, revise, and finalize collaborative research documents 
  • Have experience supporting teams and leadership in fast-paced and constantly changing environments

You may be a strong candidate if you:

  • Have an MS or advanced training in a STEM field or computational social science 
  • Have familiarity and/or experience carrying out risk assessments on general-purpose models, including via automated evaluations and/or red-teaming
  • Have familiarity with modern deep learning architectures, including natural language processing, computer vision, and/or multi-modal models 

To Apply 

Please send a cover letter and a resume. Applications will be considered on a rolling basis.