Join the Founding Team: Australia's AI Safety Institute

Australia's AI Safety Institute is hiring—most applications close 18 January 2026. If you're an Australian citizen with relevant skills, apply. Know someone who'd be great? Encourage them to apply. Early hires could be exceptionally impactful.

Australia's recently announced AI Safety Institute (AISI) is recruiting for multiple positions, including senior leadership, research scientist, engineer, and risk specialist roles. This is an opportunity to join a founding team with potential for unusually high counterfactual impact.

The AISI will work at the frontier of AI, address AI-related risks and harms, and engage internationally to shape emerging global AI safety standards.

Early AISI hires could be exceptionally impactful

  • You are less replaceable. Talent pipelines into AI safety organisations in places like the US and UK are increasingly established. If you don't take a role there, someone roughly as qualified probably will. The local Australian talent pipeline is much thinner, meaning if you're eligible and a good fit for a role at the AISI, your counterfactual impact is much higher.
  • Early hires create momentum. Great initial talent attracts more great talent. The institute's early work will establish its reputation, direction, and influence for years to come.
  • The AISI could be a major player. Australia has strong diplomatic relationships, particularly across the Indo-Pacific region, and a demonstrated ability to establish international norms that other countries follow (e.g., tobacco plain packaging).

"As a founding member of the team, you will help shape how Australia monitors, tests and governs AI. You will assess risks from frontier models, including CBRN misuse, enhanced cyber capabilities, loss-of-control scenarios, information integrity and influence risks, and broader systemic risks arising from the deployment of increasingly capable general-purpose AI systems" – Department of Industry, Science and Resources

You should err on the side of applying

These job descriptions list ideal skill combinations, particularly for senior roles. In reality, candidates rarely tick every box. The Australian talent pool for AI safety is smaller than you'd think, and it's already being competed for by well-funded global tech companies and established international nonprofits.

If you are an Australian citizen with relevant expertise and a passion for AI safety, we strongly encourage you to apply.

We can provide support

Australians for AI Safety will host two events to provide context on AI safety in Australia, discuss the proposed role of AISIs, and share tips for navigating Australian Public Service (APS) recruitment processes. Sign up for one here.

Have barriers or uncertainties that might stop you from applying? Good Ancestors thinks the AISI is important, and we want to do everything we can to ensure it goes well. If something solvable is holding you back (partner visas, the logistics of relocating back to Australia, security clearances, interview travel, career transitions, uncertainties about trade-offs, or anything else), email contact@goodancestors.org.au. We can share our perspective, help problem-solve, or connect you with other resources. The job listings also supply a contact in the Department who may be best placed to answer many questions.

Frequently asked questions

Which job should I apply for?

There are two senior leadership positions (General Manager and Head of AI Safety Research and Testing). These appear to be, roughly, a "head of AISI" and a "chief scientist". There are then three streams of other roles: technical research (AI Safety Research Scientist), technical engineering (AI Safety Engineer), and less technical policy/governance (AI Risk Specialist). In our experience, it's okay for candidates to apply for multiple roles.

Will I need to be Canberra-based?

No. Positions are available Australia-wide with flexible/remote work arrangements.

Do I need to be an Australian citizen?

Yes, Australian citizenship is required. The General Manager will also need to obtain a Negative Vetting 1 (NV1) security clearance.

Could I be offered a role later without reapplying?

Applicants suitable for a role but not selected for the current vacancy may be placed in a merit list or pool for up to 18 months. If you agree, results may be shared with other APS agencies for similar roles, meaning you could be offered a future position without needing to reapply.

How technical do I need to be?

This varies by role. The AI Safety Research Scientist and AI Safety Engineer positions require hands-on technical experience with frontier AI models. The AI Risk Specialist positions require technical AI governance knowledge but less hands-on ML experience. The leadership positions require deep familiarity with technical AI and safety research but focus primarily on strategic leadership and stakeholder management. Don't read the reference to 'frontier AI models' too strictly; if your experience has focused on open-weight models, for instance, you should still apply.

What's your relationship to the AISI? Are you promoting this for Government?

No. We have no formal relationship with the Australian AISI. Good Ancestors is a not-for-profit that thinks the Australian AISI could have a positive impact in Australia and globally, and we want to see it go well. The information in this document reflects our opinions, based on our experience. For authoritative information, you should read the job listings and other material provided by the Government, including the National AI Plan.

Summary of job listings

AISI job listings are available on the APS Jobs website, here.

General Manager
Salary: Not provided (SES Band 1 salaries are subject to individual agreement; this could be in the range of $250k-$350k)
Closes: 1 February
APS classification: Senior Executive Service Band 1
Key duties:
  • Lead AISI establishment and operations
  • Implement the National AI Plan
  • Lead a multi-disciplinary branch
  • Build relationships with AI developers
  • Advise government on safety measures and regulation
Ideal candidate:
  • Deep familiarity with technical AI and safety research
  • Understanding of government regulatory systems
  • Knowledge of international AI safety frameworks
  • Tertiary qualifications in relevant disciplines
  • Experience in industry, startups, or academia
  • Ability to balance policy, technical, and external pressures

Head of AI Safety Research and Testing
Salary: $180k-$200k
Closes: 18 January
APS classification: Executive Level 2
Key duties:
  • Design and lead the testing and research program
  • Lead empirical methods design for frontier AI systems
  • Represent Australia internationally
  • Advise government on emerging AI risks and harms
  • Manage research publications and reports
Ideal candidate:
  • 5+ years of empirical research experience
  • Experience evaluating frontier AI systems
  • Understanding of frontier risks and mitigations
  • Strong research publication track record
  • Multidisciplinary team leadership experience
  • International collaboration experience

AI Safety Research Scientist (Multiple Positions)
Salary: $122k-$173k
Closes: 18 January
APS classification: Executive Level 1-2
Key duties (senior position):
  • Lead evaluation methods and analysis
  • Deliver the research and testing program
  • Represent Australia internationally
  • Advise policymakers
  • Collaborate with industry, civil society, and academia
  • Lead research publications and reports
Key duties (standard position):
  • Contribute to evaluation methods and analysis
  • Represent Australia internationally
  • Translate technical findings into policy insights
  • Collaborate with industry, civil society, and academia
  • Contribute to research publications and reports
Ideal candidate (senior position):
  • 5+ years of empirical research experience
  • Experience evaluating frontier AI systems
  • Understanding of frontier risks and mitigations
  • Strong research publication track record
  • International collaboration experience
Ideal candidate (standard position):
  • 3+ years of empirical research experience
  • Experience evaluating frontier AI systems
  • Strong analytical and problem-solving skills
  • Experience drafting research publications
  • Demonstrated interest in international collaboration

AI Safety Engineer (Multiple Positions)
Salary: $122k-$173k
Closes: 18 January
APS classification: Executive Level 1-2
Key duties (senior position):
  • Build and operate evaluation tooling
  • Run large-scale behavioural tests
  • Diagnose failure modes and assess risks
  • Produce technical documentation
  • Improve engineering practices and infrastructure
  • Collaborate with industry, civil society, and academia
  • Contribute to research publications and reports
Key duties (standard position):
  • Support evaluation tooling development
  • Assist with behavioural testing
  • Help identify failure modes and assess risks
  • Produce technical documentation
  • Collaborate with industry, civil society, and academia
  • Contribute to research publications and reports
Ideal candidate (senior position):
  • 5+ years working with frontier AI systems
  • Experience designing and executing safety evaluations
  • Experience implementing technical safeguards
  • Experience managing large-scale evaluations
  • Understanding of AI failure modes
Ideal candidate (standard position):
  • 3+ years working with frontier AI systems
  • Experience with safety evaluation tooling
  • Experience implementing safety mitigations
  • Experience contributing to large-scale evaluations
  • Understanding of AI failure modes
  • Multidisciplinary team experience

AI Risk Specialist
Salary: $122k-$130k
Closes: 18 January
APS classification: Executive Level 1
Key duties:
  • Provide technical advice on AI governance
  • Translate research for policymakers
  • Monitor frontier AI developments
  • Collaborate with industry, civil society, and academia
  • Represent Australia internationally
Ideal candidate:
  • Experience in AI safety, security, or governance
  • Knowledge of technical AI governance
  • Experience synthesising research into guidance
  • Understanding of large language models
  • Diverse stakeholder collaboration experience
  • Strong communication skills

AI Risk Specialist
Salary: $99k-$107k
Closes: 18 January
APS classification: APS Level 6
Key duties:
  • Support AI governance advice
  • Assist with translating research for policymakers
  • Monitor frontier AI developments
  • Collaborate with industry, civil society, and academia
  • Contribute to international engagement
Ideal candidate:
  • Interest or experience in AI safety, security, or governance
  • Knowledge of technical AI governance
  • Ability to synthesise research into guidance
  • Understanding of large language models
  • Strong communication and collaboration skills

This document is published independently by Good Ancestors (not by or on behalf of the Government). We strongly recommend that you read the APS Jobs website, the information about the AISI provided by the Government (including the National AI Plan), and diverse sources about applying for APS jobs.