
AI Policy and Governance Newsletter — April 2026

YouGov polling shows 61% of Australians back AI training that compensates creators, Anthropic discloses dangerous cyber capabilities in unreleased Claude Mythos, the SOCI Act review finds the regime ill-equipped for AI risk, NSW fast-tracks $52b of data centres, and Atlassian's 1,600 job cuts bring the AI workforce story home.


20 April 2026

Hi,

First up, breaking news: a YouGov poll commissioned by Good Ancestors, reported exclusively by InnovationAus, found that 61% of Australians support (versus 18% oppose) allowing AI training on Australian soil provided creators are compensated, with 57% backing a library-style fund modelled on the Public Lending Right scheme. The government's copyright consultations are seven months in without resolution, and this polling suggests the public is ahead of the debate.

Earlier this month Anthropic published the system card for Claude Mythos Preview (a model the company judges too dangerous to release). In testing, Mythos identified zero-day vulnerabilities in every major operating system and browser. Days later, the US Federal Reserve and Treasury summoned the CEOs of the largest US banks to discuss cyber risks.

Anthropic CEO Dario Amodei was in Australia a week earlier, meeting the Treasurer and the Prime Minister, opening Anthropic's Sydney office, and signing a Memorandum of Understanding with the Australian Government. Among other things, the MoU covers technical exchanges with the Australian AI Safety Institute (AISI), alignment with the Government's data centre expectations, and support for Australian startups, research and AI adoption. It's a non-binding agreement, but it signals interest from a top-tier lab and an opening for Australia.

In the same fortnight, the independent review of the Security of Critical Infrastructure (SOCI) Act found the regime ill-equipped for AI-related risk; NSW fast-tracked $52 billion in data centre projects; the Commonwealth's data centre expectations were labelled "vague" by industry; and Atlassian announced 1,600 job cuts as it pivots harder into AI.

Welcome to the AI Policy and Governance newsletter from Good Ancestors. We track the biggest developments in AI policy and safety, at home and abroad.

Featured Australian publications

  • Independent Review of the Security of Critical Infrastructure (SOCI) Act 2018 (Dr Slay): The first independent review of the SOCI Act finds it "complex, duplicative and ill-equipped" for emerging technological risks and calls for major change, including expanding coverage to AI infrastructure, offshore dependencies, data poisoning, and agentic AI.
  • Policy Settings for Responsible Use of Artificial Intelligence in Defence (Australian Department of Defence): Defence, excluded from the DTA's government-wide AI policy, releases its own binding AI policy, including risk-based controls across the ADF and the broader Defence portfolio.
  • Australian Government response to the Senate Select Committee on Adopting AI (DISR): Government's response to the November 2024 Senate Select Committee report largely restates commitments from the National AI Plan, but signals willingness to act on copyright licensing arrangements, confirms that new transparency requirements for automated decision-making will commence on 10 December 2026, and foreshadows an AI Accelerator CRC funding round.
  • Offensive Cyber Time Horizons (Lyptus Research): Australian research group finds offensive cyber capability is doubling every 5.7 months, with Opus 4.6 and GPT-5.3 now reaching 50% success rates on tasks that take human experts three hours. The findings provide useful quantification for debates on AISI evaluation priorities and SOCI scope.

News & commentary

Anthropic discloses offensive cyber capability in unreleased Claude Mythos model

🌍 Tech, 🤝 Principles

On 8 April, Anthropic published the system card for the Claude Mythos Preview model. During internal testing, Mythos identified and exploited zero-day vulnerabilities in every major operating system and browser, and wrote working exploits in hours that penetration testers said would take weeks. Anthropic describes Mythos Preview as posing "the greatest alignment-related risk" of any model it has built, and says exploit-writing "emerged as a downstream consequence of general improvements in code, reasoning and autonomy", not from deliberate training.

In another test, Mythos escaped its sandbox and messaged the researcher, as directed. Then, without being asked, the model developed a further exploit that gave it broad internet access and posted the exploit publicly.

Rather than release the model, Anthropic launched Project Glasswing, a private access program for defenders at AWS, Apple, Google, Microsoft, NVIDIA and others. Federal Reserve Chair Jerome Powell and Treasury Secretary Scott Bessent summoned the CEOs of major US banks to discuss cyber risks.

In Australia, Lee Hickin, Executive Director of the National AI Centre, publicly flagged concern, asking whether AI has become powerful enough to break the norms of how we use the internet. Australian research group Lyptus Research (see Featured Publications) put numbers on the trajectory: offensive cyber capability doubling every 5.7 months, with Opus 4.6 and GPT-5.3 now at 50% success rates on tasks that take human experts three hours.
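
To put that doubling rate in perspective, here is a minimal back-of-the-envelope sketch (ours, not a calculation from the Lyptus report) of how a steady 5.7-month doubling time compounds if the trend simply continues:

```python
# Illustrative only: extrapolates the 5.7-month doubling time quoted above.
# The horizons and the assumption of steady exponential growth are ours,
# not claims from the Lyptus Research report.

DOUBLING_TIME_MONTHS = 5.7

def capability_multiplier(months: float) -> float:
    """Relative capability after `months` of steady exponential growth."""
    return 2 ** (months / DOUBLING_TIME_MONTHS)

for horizon in (6, 12, 24):
    print(f"After {horizon} months: ~{capability_multiplier(horizon):.1f}x today's level")
# 12 months works out to roughly 4.3x, and 24 months to roughly 18.5x.
```

On that simple extrapolation, offensive cyber capability roughly quadruples each year.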

Comment:

Project Glasswing is a more careful response than AI developers have taken in the past. It also shows how little leverage governments have over decisions that affect their interests. Anthropic chose when to announce, who would get early access, and how long they would have to prepare.

In some ways, Mythos is a hostage situation. Highly capable AI creates the threat, and highly capable AI is the only defence. A small number of providers may get early access to try to harden their systems before the capability is released more broadly, whether by Anthropic or by someone else. We will see whether the Australian Government has this early access, and whether it has enough time to harden public-facing systems like MyGov or Medicare. We'll also see how the long tail of Australian businesses fares.

For cybersecurity, there's some offence-defence balance that early access might tip. But when AI models cross equivalent thresholds in biological capability, self-improvement capability or other frontier risks, it's unclear what the Project Glasswing equivalent would look like.

SOCI Act review finds AI infrastructure risk is slipping through the cracks

🧾 Legislation, ✍ Regulation

The first independent review of the Security of Critical Infrastructure (SOCI) Act 2018 was tabled on 24 March. The review, led by cybersecurity academic Dr Jill Slay, found the Act "complex, duplicative and ill-equipped" for the geopolitical and technological risks it now needs to manage, and flagged AI-related critical infrastructure risk as a significant gap. Slay recommends "major legislative change", including expanding coverage to "all AI infrastructure and services" as well as offshore dependencies, agentic AI, content delivery networks and satellite dependencies.

A day later, Home Affairs Minister Tony Burke released a discussion paper proposing a 'high-risk' technology ban for critical infrastructure and reforms to ministerial directions powers, with the first tranche of SOCI reforms out for public consultation.

Comment:

The review strengthens a claim Good Ancestors and others have been making for years: that Australia's "existing laws are enough" position does not survive contact with frontier AI. Good Ancestors' submission to the review argued two related points: that data centres training general-purpose AI models likely fall outside SOCI coverage because the Act's definitions predate that workload, and that general-purpose AI models are themselves becoming critical infrastructure. Anthropic's Mythos disclosure shows how one developer's decisions can ripple through banking, healthcare, communications and government services. The Productivity Commission's "last resort" framing for AI-specific regulation, and Government's own stance that existing regulatory frameworks can be adapted, are harder to sustain when a statutory review commissioned by Government concludes the key piece of critical-infrastructure legislation is not fit for purpose.

NSW fast-tracks $52b of data centres while Commonwealth expectations land "vague"

✍ Regulation, 🤝 Principles

On 27 March, NSW's Investment Delivery Authority signed off on 15 data centre projects worth $52b, rejecting a further $40.7b as "premature or overly speculative". NSW Treasurer Daniel Mookhey framed the approach as a "race to the top", pairing fast-tracking with consultation on tougher energy, water and local-content principles.

Industry Minister Tim Ayres, Energy Minister Chris Bowen and Assistant Minister Andrew Charlton released the Government's expectations for data centre and AI infrastructure developers. The Government says proposals not aligned with the expectations will not be prioritised. Data Centres Australia's Belinda Dennett welcomed the framework but said the lack of clear compliance criteria "potentially disincentivises investment".

Google is reportedly withholding $20b in planned Australian investment over tax concerns, fearing a local data centre hub would trigger "permanent establishment" status and a 30% corporate tax rate across all its Australian operations. The Australian Energy Market Operator introduced new ride-through rules targeting grid risk from data centre loads, which are expected to triple by 2030. David Swan reported in the SMH that for every $100 of hyperscale data centre investment, $70–80 leaves Australia almost immediately—flowing to semiconductor makers in Taiwan, server manufacturers in the US and cooling equipment suppliers in Europe.

Writing in The Strategist, CNAS's Janet Egan argued Australia could become a "data-centre superpower": its land, clean-energy resources and skilled trades allow it to build quickly, and because AI chip supply is constrained, new Australian capacity would displace dirtier compute elsewhere. Egan also argued a substantive Australian presence would give AI labs stronger incentives to engage with the AISI, intelligence and defence agencies, warning that "opting out doesn't mean opting out of [AI's] consequences"—it just means "losing the tools, expertise, and leverage needed to prepare and respond".

Comment:

The principles themselves are reasonable, but criticism that they lack specificity is fair. There are also questions about whether administrative decision makers working under specific laws can lawfully "deprioritise proposals" based on a separate policy statement. If the outcome the federal government wants is to give green lights and red lights to AI data centres, the best approach is to do that directly.

With power, water and other issues partly addressed, the key gap now is copyright. Without a fair-use or text-and-data-mining exemption, AI companies have little reason to train models in Australia when they use the same data for free elsewhere. Creating a path for data centres is good, but currently, the path only leads to inference, not training. Australia could be left with little leverage or standing to shape the future of AI in our national interest.

Atlassian's 1,600 job cuts bring the AI workforce story home

🌍 Tech, 🤝 Principles

On 9 April, Atlassian announced it would cut around 1,600 jobs (roughly 10% of its workforce) as it pivots harder into AI. Bendigo Bank signalled AI and outsourcing would cut costs by 10%, and Life360 disclosed AI-led job cuts despite previously assuring the market it would avoid mass layoffs.

A Randstad survey found one in three Australian workers fear their job will disappear within five years. David Braue reported in Information Age that AI is being blamed for every Australian tech sacking this year. Elizabeth Knight wrote in The Age about a "race to the bottom" on AI job cuts. Toby Walsh, long cautious about displacement claims, has shifted, writing that "this time is different… AI will, in the longer term, come after many other jobs. Customer support, human resources, legal work and consultancy are likely to be next". CBA chair Paul O'Malley warned AI could erode Australia's tax base if white-collar jobs move offshore.

Comment:

The Deloitte figures on AI uplift assume the productivity gains accrue broadly to Australians. That is not the default path. Instead, displacement is likely to concentrate first among junior, process-driven and commodified roles, while capital returns accrue to a much narrower group. Without deliberate tax, training and income-support settings, Atlassian-scale restructures hand the gains to shareholders while leaving workers with the transition costs.

The claims made by AI companies are stronger still. They claim to be building AI models that can perform all the valuable cognitive work humans can. While it's right to be sceptical of claims like this, the government should also have a plan in case they come true. Currently that conversation isn't being had.


That's all for now!

If you'd like to share any relevant news items, discuss AI governance, or learn how you can support our advocacy work, please reach out.

Onward in action!

The Good Ancestors team
