Research Science Lead

US or UK / Remote-friendly

We are looking for Research Science Leads to advance select workstreams at the forefront of AI safety. As Research Science Lead, you will be responsible for managing complex programs in collaboration with our member firms and for facilitating the development of consensus best practices for frontier AI safety. 

We are currently looking for Research Science Leads to carry forward our work on capabilities evaluations, risk assessments and mitigation measures for frontier AI, including those relating to: 

  • Chemical, biological, radiological and nuclear (CBRN) threats
  • Advanced cybersecurity threats
  • Persuasion, deception, and malicious use threats

As a Research Science Lead, you will be adept at fostering collaboration, monitoring and synthesizing AI safety research, and managing multiple research programs simultaneously. This will include organizing and facilitating workshops with domain experts from our member firms, as well as drafting, revising, and finalizing best practice guidelines and research briefs.

About the FMF

The Frontier Model Forum is an industry non-profit dedicated to the safe development and deployment of frontier AI models. By drawing on the technical and operational expertise of our members, we aim to (1) identify best practices for frontier AI safety and support the development of frontier AI safety standards, (2) advance AI safety research for frontier models, and (3) facilitate information sharing about frontier AI safety among government, academia, civil society and industry. 

At the Frontier Model Forum (FMF), we value diversity of experience, knowledge, backgrounds and perspectives, harnessing these qualities to create extraordinary impact. We are committed to equal employment opportunities regardless of sex, race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, pregnancy, or related condition (including breastfeeding) or any other basis as protected by applicable law. If you have a disability or additional need that requires accommodation, please do not hesitate to let us know.  

Key responsibilities

  • Set direction for research workstreams and drive their implementation, working with Forum leadership to develop a portfolio of timely research workshops and outputs
  • Organize, moderate and lead multiple working groups and research-oriented initiatives, delivering against a range of research objectives
  • Research and develop white papers for use as structured read-aheads for working group meetings and for potential publication on the FMF’s website 
  • Act as a key partner to Forum leadership, helping to inform and shape the team’s research strategy
  • Independently represent the FMF’s strategy and narrative when collaborating with a wide array of stakeholders 
  • Facilitate conversations, meetings and opportunities for internal and external collaboration on research programs
  • Evaluate the success of research programs, workstreams, and workshops against their stated goals and objectives

Additional responsibilities

  • Lead on and coordinate expert workshops, liaising with key experts and researchers from FMF firms and non-member organizations
  • Balance progress on the FMF’s long-term scientific objectives with regular short-term technical deliverables (e.g., workshops, memos, and guidelines)
  • Conduct informational conversations with research teams at member firms and with external stakeholders to identify convergence and divergence among emerging safety practices
  • Monitor AI safety literature and research, staying abreast of key developments in capabilities evaluations, risk assessments, interpretability, and related topics
  • Work with Forum leadership to identify opportunities to develop best practices, guidelines, and other public goods

About You

You may be a good fit for Research Science Lead if you:

  • Have a clear passion for advancing frontier AI safety and actively seek opportunities to expand your domain knowledge
  • Have significant experience in program management and driving forward collaborative projects
  • Have strong communication skills and an ability to develop constructive relationships with key partners and stakeholders
  • Thrive at organizing, facilitating, and synthesizing expert workshops and research convenings 
  • Know how to draft, revise, and finalize collaborative research documents 
  • Have experience supporting teams and leadership in fast-paced and constantly changing environments

You may be a strong candidate if you:

  • Have a PhD or advanced degree in a STEM field or computational social science 
  • Have familiarity and experience with training deep learning models, including for natural language processing, computer vision, and/or multi-modal applications 
  • Have experience carrying out risk assessments on general-purpose models, including via automated evaluations and/or red-teaming

To Apply 

Please send a cover letter and a resume. Applications will be considered on a rolling basis.