
Australian AI Policy White Paper
AI progress will accelerate during this term of Government, bringing new opportunities, risks and disruptions. Markets forecast that models with human-like cognitive capabilities will be developed within this term. Even if these forecasts prove optimistic, increasingly capable AI models will continue to shape society.
AI could be transformative this term of Government
“Possibly by 2026... the capabilities of AI systems will be best thought of as akin to... a ‘country of geniuses in a datacenter’—with the profound economic, societal, and security implications.”
Dario Amodei, CEO, Anthropic
AI is outperforming humans, acting autonomously
AI models are rapidly approaching and exceeding human capabilities across a range of tasks. Models released in 2023-2025 have surpassed human performance in areas like reading comprehension, language understanding, and problem-solving.
AI's ability to complete tasks autonomously is increasing dramatically, with the length of tasks AI can handle doubling approximately every 7 months.
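To make that pace concrete, the back-of-envelope sketch below (in Python) assumes only that the roughly seven-month doubling continues and that a term of Government runs about three years; it is illustrative, not a forecast.

# Back-of-envelope projection: if the length of tasks AI can complete
# autonomously doubles roughly every 7 months, how much does the task
# horizon grow over a standard three-year term of Government?
# Illustrative only; assumes the doubling trend simply continues.
DOUBLING_TIME_MONTHS = 7
TERM_LENGTH_MONTHS = 36

multiplier = 2 ** (TERM_LENGTH_MONTHS / DOUBLING_TIME_MONTHS)
print(f"Implied task-length growth over one term: ~{multiplier:.0f}x")
# Prints ~35x: for example, a 30-minute task horizon today would imply
# a horizon of roughly 17 hours by the end of the term, if the trend holds.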
Australians are worried and want bold governance
Australians are more concerned about AI risks than most other populations, with many focused on preventing catastrophic outcomes. They overwhelmingly want the government to establish new regulatory bodies and believe Australia should take a leadership role in global AI governance rather than wait for other countries to act.
36% willing to trust AI¹
78% concerned about negative outcomes²
86% support a new regulatory body³
94% say Australia should lead on governance⁴
Building credible trust unlocks opportunity
Faster AI adoption would create substantially greater economic value for Australia than slow adoption, yet public mistrust is holding uptake back. Building credible trust through meaningful safety measures is essential to unlocking AI's full potential.
2.6x opportunity from higher trust
“Trust is a critical driver for AI adoption. If people do not trust AI, they will be reluctant to use it. Such reluctance can reduce demand for AI products and hinder innovation.”
The University of Sydney
By clearing the decks in 2025 we can be ready for what’s coming
Launch an Australian AI Safety Institute
Build domestic expertise, assess risks, develop tools, and aid regional partners.
Attract global talent to the expert panel
Proactively recruit leading Australian AI experts working overseas to bring world-class perspectives.
Introduce an Australian AI Act
Provide regulatory clarity, with a flexible approach to defining high-risk systems and a focus on transparency and accountability.
Update the AI Safety Standard
Strengthen the standard beyond general principles to provide specific safety guidance for advanced AI systems.
Host the next AI Safety Summit
Participate in the upcoming summit in India and bid to host the following one, refocusing the agenda on core safety challenges.
Implement courageous industrial policy
Invest strategically in AI compute infrastructure, leveraging Australia's clean energy advantages to position the country for future AI development.
2026 and beyond?
Prepare for rapid AI advances, such as sophisticated AI agents that handle complex tasks and reshape the job market. New risks will emerge from the misuse of AI tools, and powerful AI companies may become dominant while concealing their most advanced capabilities. This era will also bring escalating digital and physical risks, potentially culminating in the development of artificial superintelligence, a profound shift for humanity.
Future-Proof Australia
Powerful AI systems will have wide-reaching impacts on every part of our society, requiring a whole-of-government focus. Getting Australia ready for an AI world will require more resilient regulatory frameworks, a national AI cybersecurity uplift, and fresh thinking about tax and welfare in an AI-driven economy.
Global AI Treaty to share benefits and manage risks
The challenges of advanced AI transcend national borders, and artificial general intelligence (AGI) will destabilise the global economy and geopolitics. A treaty could cap AI capabilities so that dangerous systems dramatically exceeding human cognitive capabilities are never built, share the benefits of AI, and ensure AI companies do not become more powerful than countries.