
Australian AI Legislation Stress Test: Expert Survey
Expert analysis of five AI threats shows critical gaps in Australia's regulatory framework
Executive Summary
Drawing on input from 64 experts spanning AI, public policy, cybersecurity, national security, and law, this report evaluates five artificial intelligence (AI) threats, assessing their risk and the adequacy of Australia's current laws, and provides recommendations for policymakers and regulators. Experts evaluated the following AI threats:
Unreliable Agent Actions: An AI agent incompetently pursuing an intended goal, causing harm through errors, deception, or fabrication.
Unauthorised Agent Actions: An AI agent competently pursuing an unintended goal, causing harm by exceeding user control or authority.
Open-Weight Misuse: The adaptation of publicly released AI models for malicious use by removing built-in safety features.
Access to Dangerous Capabilities: AI models providing access to specialised knowledge, such as how to create biological, chemical, or cyber weapons.
Loss of Control: An AI system escaping human control through mechanisms like self-replication or recursive self-improvement.
Risk assessment
Experts separately assessed the likelihood of each threat causing 'Moderate' or greater harm (>9 fatalities, >18 casualties, or >$20m AUD economic cost) in the next 5 years, and the potential severity of that harm if it were to occur.
Open-Weight Misuse and Unreliable Agent Actions were rated as the most likely to occur, with a median evaluation of 'Likely or Probable'.
Loss of Control was rated as the most dangerous. If it were to occur, its median assessed impact was 'Catastrophic' (>1,000 fatalities, >2,000 casualties, or >$20b AUD economic cost).
Adequacy of current government measures
Experts assessed the adequacy of current Australian Government measures for mitigating each AI threat.
Across all threats, the vast majority of experts found existing measures to be inadequate.
The measures for managing Loss of Control were considered the least adequate, with over 93% of experts rating them as inadequate.
Legal analysis and regulatory recommendations
Legal analysis shows that AI is not entirely unregulated: at least some relevant laws were identified for each AI threat. The report identifies specific ways these existing laws could be improved to better address AI-related harms.
However, many AI threat scenarios highlight risks from general-purpose AI that no specific regulator is tasked to address. For these threats, "chokepoint" or "upstream" regulation that ensures general-purpose AI models have appropriate safeguards would be more efficient and effective than "downstream" regulation that attempts to address every specific way these general technologies could cause harm.
Overall, this report finds that increasingly capable and general-purpose AI poses risks on a national scale and that existing regulators are not well placed to address them. The identified threats from general-purpose AI systems transcend regulator boundaries, requiring coordinated, upstream intervention to mitigate them effectively and efficiently. Expert analysis justifies new laws targeting these five national-scale threats. We encourage departments and regulators to use these AI threats and scenarios for their own detailed stress-testing, and the authors would welcome the opportunity to collaborate on this important work.
We thank the 64 experts who contributed their expertise to this report, spanning AI research, public policy, cybersecurity, national security, and law. We also acknowledge Australians for AI Safety for their support of this research.
Media: Contact media@goodancestors.org.au or read the Australians for AI Safety press release: ‘As Government Splits on AI Rules, New Report Reveals Critical Gaps’.
Published: 19 August 2025
Authors: Greg Sadler, Emily Grundy, Luke Freeman, Nathan Sherburn
Contact: If you would like to discuss the report or propose further research, please let us know at contact@goodancestors.org.au