Join the Founding Team: Australia's AI Safety Institute
Australia's AI Safety Institute is hiring—most applications close 18 January 2026. If you're an Australian citizen with relevant skills, apply. Know someone who'd be great? Encourage them to apply. Early hires could be exceptionally impactful.
Australia's recently announced AI Safety Institute (AISI) is recruiting for multiple positions, including senior leadership, research scientists, engineers, and risk specialists. This is an opportunity to join a founding team with potential for unusually high counterfactual impact.
The AISI will work at the frontier of AI, address AI-related risks and harms, and engage internationally to shape emerging global AI safety standards.
Early AISI hires could be exceptionally impactful
- You are less replaceable. Talent pipelines into AI safety organisations in places like the US and UK are increasingly established. If you don't take a role there, someone roughly as qualified probably will. The local Australian talent pipeline is much thinner, meaning if you're eligible and a good fit for a role at the AISI, your counterfactual impact is much higher.
- Early hires create momentum. Great initial talent attracts more great talent. The institute's early work will establish its reputation, direction, and influence for years to come.
- The AISI could be a major player. Australia has strong diplomatic relationships, particularly across the Indo-Pacific region, and a demonstrated ability to establish international norms that other countries follow (e.g., tobacco plain packaging).
You should err on the side of applying
These job descriptions list ideal skill combinations, particularly for senior roles. In reality, candidates rarely tick every box. The Australian talent pool for AI safety is smaller than you'd think, and it's already being competed for by well-funded global tech companies and established international nonprofits.
If you are an Australian citizen with relevant expertise areas and a passion for AI safety, we strongly encourage you to apply.
We can provide support
Australians for AI Safety will host two events to provide context on AI safety in Australia, discuss the proposed role of AISIs, and share our tips for navigating Australian Public Service (APS) recruitment processes. Sign up for one here:
- Monday 22 December at 12pm AEDT (suitable for US/Asia timezones)
- Tuesday 6 January at 8pm AEDT (suitable for UK/Europe timezones)
Have barriers or uncertainties that might stop you from applying? Good Ancestors thinks the AISI is important, and we want to do everything we can to ensure it goes well. If there is anything solvable holding you back (questions about partner visas, the logistics of relocating back to Australia, security clearances, interview travel, career transition uncertainties, trade-offs, or anything else), email contact@goodancestors.org.au. We can share our perspective, help problem-solve, or connect you with other resources. The job listings also provide a contact in the Department who may be best placed to answer many questions.
Frequently asked questions
Which job should I apply for?
There are two senior leadership positions (General Manager and Head of AI Safety Research and Testing). These appear to be, roughly, a "head of AISI" and a "chief scientist" respectively. There are then three streams of other roles: technical research (AI Safety Research Scientist), technical engineering (AI Safety Engineer), and less technical policy/governance (AI Risk Specialist). In our experience, it's okay for candidates to apply for multiple roles.
Will I need to be Canberra-based?
No. Positions are available Australia-wide with flexible/remote work arrangements.
Do I need to be an Australian citizen?
Yes, Australian citizenship is required. The General Manager will also need to obtain a Negative Vetting 1 (NV1) security clearance.
Could I be offered a role later without reapplying?
Applicants suitable for a role but not selected for the current vacancy may be placed in a merit list or pool for up to 18 months. If you agree, results may be shared with other APS agencies for similar roles, meaning you could be offered a future position without needing to reapply.
How technical do I need to be?
This varies by role. The AI Safety Research Scientist and AI Safety Engineer positions require hands-on technical experience with frontier AI models. The AI Risk Specialist positions require technical AI governance knowledge but less hands-on ML experience. The leadership positions require deep familiarity with technical AI and safety research, but focus primarily on strategic leadership and stakeholder management. You shouldn't read the reference to 'frontier AI models' too strictly—for instance, if your experience has focused on open-weight models, you should still apply.
What's your relationship to the AISI? Are you promoting this for Government?
No. We have no formal relationship with the Australian AISI. Good Ancestors is a not-for-profit that thinks the Australian AISI could have a positive impact in Australia and globally, and we want to see it go well. The information in this document reflects our opinions, based on our experience. For authoritative information, you should read the job listings and other information provided by the Government, including the National AI Plan.
Summary of job listings
AISI job listings are available on the APS website, here.
| Position | Salary | Closes | APS Classification |
|---|---|---|---|
| General Manager | Not provided (SES Band 1 salaries are subject to individual agreement; this could be in the range of $250k-$350k) | 1 Feb | Senior Executive Service Band 1 |
| Head of AI Safety Research and Testing | $180k-$200k | 18 Jan | Executive Level 2 |
| AI Safety Research Scientist (multiple positions) | $122k-$173k | 18 Jan | Executive Level 1-2 |
| AI Safety Engineer (multiple positions) | $122k-$173k | 18 Jan | Executive Level 1-2 |
| AI Risk Specialist | $122k-$130k | 18 Jan | Executive Level 1 |
| AI Risk Specialist | $99k-$107k | 18 Jan | APS Level 6 |

Each listing also sets out key duties and a description of the ideal candidate; for the research scientist and engineer streams, these are split into separate senior and standard position criteria. See the individual listings for these details.
This document is published independently by Good Ancestors (not by or on behalf of Government). We strongly recommend that you read the APS Jobs website, read the information about the AISI provided by Government (including the National AI Plan), and read diverse sources about applying for APS jobs.