AI Safety | January 2026

Pre-Budget Submission 2026-27

Department of the Treasury

Background

In January 2026, Good Ancestors made a pre-budget submission to Treasury following the Government's November 2025 announcement of the Australian AI Safety Institute (AISI). The Government's announcement is an excellent step in addressing harms from AI. However, the AISI's $30 million funding over four years is insufficient for its extensive mandate: serving as the whole-of-government coordination hub, advising ministers and regulators, supporting compliance across sectors, and addressing both upstream risks and downstream harms.

Our submission

Our submission calls for increased AISI funding to match its scope. It also outlines opportunities for AI data centres and the AI Assurance Technology industry to secure Australia's place in the AI value chain.

AI presents significant economic opportunity. By 2030, AI could contribute between $45 billion and $115 billion annually to Australia's economy—equivalent to 2-5 per cent of GDP. Companies announced plans to invest upwards of $100 billion in Australian data centres between 2023 and 2025, with Government positioning Australia as a "leading destination for data centre investment." Beyond adoption, Australia has opportunities in the AI Assurance Technology (AIAT) industry, predicted to reach USD 276 billion by 2030.

Inadequate risk management stifles this opportunity. Australians are among the least trusting of AI globally: 96 per cent hold concerns about generative AI, and only 36 per cent trust AI systems. Distrust lowers adoption rates, and adoption rates determine whether the economic value Australia captures by 2030 is closer to $45 billion or $115 billion. Australian businesses want to adopt AI but lack adequate tools, frameworks, and knowledge to manage current and emerging risks.

The risks threaten more than adoption. In 2025, both OpenAI and Google warned that their leading models crossed chemical, biological, radiological, and nuclear (CBRN) risk thresholds. Google assessed that Gemini 2.5 Deep Think reached the "early warning threshold" for its CBRN risk standard. One AI agent ranked in the top 5% of over 400 teams in cybersecurity competitions, while AI systems are approaching and surpassing human performance across problem-solving, scientific reasoning, and persuasion.

The AISI—Government's key trust-building initiative—is not adequately resourced. Government allocated $30 million over four years to the AISI, averaging $7.5 million per year. Good Ancestors surveyed 139 professionals with expertise in AI safety, governance, and related fields—53.3% recommended over $50 million per year, and 77% recommended at least $25 million annually. For every $1,000 of predicted AI economic benefit to Australia by 2030, Government is investing only 7 cents in the AISI. By comparison, Government earmarked $166 million for GovAI Chat, its internal AI chat tool for the Australian Public Service—more than five times the AISI budget for an internal productivity tool.
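The "7 cents" figure above can be reproduced with simple arithmetic. A minimal sketch follows, assuming the comparison is made against the upper $115 billion annual benefit estimate (the submission does not state which baseline it uses):

```python
# Sketch of the funding-to-benefit arithmetic quoted above.
# Assumption: the "7 cents per $1,000" figure is computed against the
# upper $115 billion annual benefit estimate.
aisi_total = 30e6                 # AISI funding: $30 million over four years
aisi_per_year = aisi_total / 4    # $7.5 million per year
benefit_per_year = 115e9          # upper estimate of annual AI benefit by 2030

# Cents of AISI funding per $1,000 of predicted annual benefit
cents_per_thousand = aisi_per_year / benefit_per_year * 1000 * 100
print(f"AISI investment per $1,000 of benefit: {cents_per_thousand:.1f} cents")
```

On these assumptions the result is roughly 6.5 cents, consistent with the rounded 7-cent figure; using the lower $45 billion estimate instead would give about 17 cents.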

Our recommendations

1. Increase AISI funding to $50 million per year to match its extensive mandate and be proportionate to the UK AISI, which receives GBP 66 million annually (approximately AUD 132 million).

2. Require AI companies striking large data centre deals to commit to transparent information sharing with the AISI and make compute resources available for AI safety research, ensuring infrastructure investments serve national interests.

3. Establish an AIAT Grant Programme with $90 million in cornerstone funding, jointly administered by the National AI Centre and the AISI, to build Australia's AI Assurance Technology industry and capture value in a market forecast to reach USD 276 billion by 2030.