
Filling the Gap: Expert Consultation Highlights AI Regulatory Needs
Submission to the Productivity Commission’s 5 Pillars Interim Reports Consultation: Harnessing Data and Digital Technology
In September 2025, Good Ancestors made a submission to the Productivity Commission's inquiry on Harnessing Data and Digital Technology. Our submission addresses the Commission's call for gap analyses by presenting findings from the Australian AI Legislation Stress Test—an assessment of how current Australian laws would respond to five key AI threats.
Background
The Productivity Commission's interim report on Harnessing Data and Digital Technology called for a pause on mandating AI guardrails and for comprehensive gap analyses of existing rules, positioning AI regulation as a "last resort." It focused on AI's benefits and on how regulation could stifle innovation, but provided no comparable assessment of potential AI-related costs or harms. Nor did it account for how Australians' low trust in AI would slow adoption and limit economic benefit. The result is an overly optimistic outlook that plays up benefits while playing down risks.
Meanwhile, AI development has outpaced Australia's consultation processes. Since the Government began consulting on AI safety in June 2023, AI models have crossed chemical, biological, radiological, and nuclear (CBRN) risk thresholds, with labs predicting Artificial General Intelligence as early as 2026.
Our submission
Our submission centres on expert findings from the Australian AI Legislation Stress Test, which surveyed experts across AI, public policy, cybersecurity, national security, and law. The stress test answers the Commission's request for gap analyses by examining how current Australian laws would respond to specific AI threat scenarios.
Key findings include:
- Across all threats assessed, the vast majority of experts found current Government measures inadequate
- Up to 93% of experts consider current measures inadequate for managing threats from general-purpose AI models
- Existing regulators are well placed to address many, but not all, AI risks
- 'Upstream' regulation at the model development level would be more efficient than 'downstream' regulation for addressing AI threats that transcend traditional regulatory boundaries
Limitations of the Commission's Approach
- The Commission's approach would overlook evolving risks
- AI use cases and risks are constantly shifting, so the Commission's comprehensive gap analyses will only ever produce a snapshot: by the time the analyses are complete, the risk landscape will have moved on.
- General-purpose AI undermines existing regulatory approaches. These systems operate across multiple regulatory domains, creating impractical compliance burdens and novel liability gaps that current law cannot navigate.
- We need AI-specific regulation
- Not all AI threats can be effectively managed via existing approaches. Novel AI risks such as agents exceeding their authority, misuse of open-weight models, and loss-of-control scenarios require new frameworks. AI also reduces the expertise required for malicious acts and bypasses person-based and place-based regulatory chokepoints.
- AI-specific regulation can be technology-neutral. Australia can draft technology-neutral AI regulations by defining systems based on functions or risk tiers rather than specific technologies, just as the Therapeutic Goods Act does.
- Developers need appropriate accountability. Without developer regulations, obligations fall only on Australian users who cannot manage risks of 'black box' systems.
- Lack of trust slows adoption and limits economic benefits
- Australians are less trusting of AI than people in most other countries, with 96% holding concerns and only 36% trusting AI systems. The Tech Council of Australia has shown that delays to adoption have a dramatic economic impact (61% less value than fast-paced adoption). The Commission does not account for this lack of trust when forecasting benefits.
Australia's Path Forward: Leadership and AI Regulation
- Why Australia must act now
- Regulatory delay becomes increasingly dangerous as AI capabilities advance rapidly. The Commission's comprehensive review could take years while AI models cross new risk thresholds.
- Australia should lead in global AI governance. 94% of Australians support Australia taking a leadership role in international AI governance, but meaningful participation requires sovereign technical capability.
Our Recommendations
Acknowledge regulatory gaps
- Evidence demonstrates that certain AI risks are likely and consequential, and that existing laws cannot effectively address them. Sufficient evidence exists to justify building governance structures now.
Create appropriate governance structures
- Australia needs three complementary elements:
- An AI Act providing technology-neutral legislative framework
- An AI regulator to enforce standards and coordinate across sectors
- An Australian AI Safety Institute for independent technical evaluations and safety research
- How could an AI Act work? A well-designed Act could create technology-neutral AI definitions, establish the primacy of existing regulators, create an expert AI regulator, and allow the adoption of international standards.
Include credible trust-building measures or reduce assumptions about the economic benefit of AI
- The Commission should either recommend credible trust-building measures that would remove trust as a barrier to adoption, or revise down its forecast AI adoption rates to reflect these non-economic factors.
Conclusion
Expert assessment demonstrates that existing measures are inadequate for managing national-scale AI risks. A well-designed AI Act can target these risks while supporting innovation, increasing the net benefit Australians gain from AI.