
AI Policy and Governance Newsletter — May 2026

An expert open letter urges Australia to use existing biosecurity powers to address AI-enabled bioweapons risk, Australia's AI strategy takes shape but frontier training remains an open question, Australia is shut out of Anthropic's Mythos defensive coalition, and CAISI signs pre-release testing agreements with Google, Microsoft, and xAI.

Monday 11 May 2026

Hi,

An expert-led open letter, signed by over 125 biosecurity and AI experts, parliamentarians, and members of the public, has urged Agriculture Minister Julie Collins to use her existing powers to block imports of synthetic DNA and RNA — which can be used to engineer pathogens — from providers that don't screen orders for dangerous sequences. A New York Times investigation revealed chatbots can describe how to acquire genetic material, synthesise pathogens, and disperse them in public spaces, and a fresh Anthropic benchmark showed Mythos Preview answering 82.6% of expert-authored bioinformatics questions.

Australia's broader AI strategy continues to take shape. Assistant Industry Minister Andrew Charlton's "Sliding Doors" speech argued Australia must build, not just use, AI. How frontier-model training fits into Australia's strategy is unclear. New YouGov polling commissioned by Good Ancestors found 61% of Australians support arrangements that facilitate AI training here while compensating creators.

Independent testing has confirmed the cyberattack capability Anthropic warned about with Mythos; OpenAI's GPT-5.5 reached comparable capability on UK AISI's 32-step benchmark; and unauthorised users gained access to Mythos via a third-party vendor on day one. In response, the US Center for AI Standards and Innovation signed pre-release testing agreements with Google, Microsoft, and xAI. The White House is reportedly considering executive action to make pre-release reviews the US standard. No Australian organisation is named as participating in Anthropic's Project Glasswing.

Welcome to the AI Policy and Governance newsletter from Good Ancestors. We track the biggest developments in AI policy and safety, at home and abroad.

News & commentary

Open letter calls on Australia to reduce AI-enabled bioweapons risk — new legislation not needed

An expert-led open letter signed by over 125 biosecurity, AI, and policy experts has urged Agriculture Minister Julie Collins to use existing Biosecurity Import Conditions (BICON) powers to require synthetic nucleic acid providers to screen orders for "sequences of concern" before exporting to Australia (a power available now without new legislation). Janet Egan, deputy director of technology at the Centre for a New American Security, told the AFR: "What we're seeing is that leading models and specialised AI models are actually overcoming a lot of those barriers to entry into this field."

The letter cites the International AI Safety Report 2026 – which found general-purpose AI systems can provide expert-level guidance on biological and chemical weapons development – and a Cornell study finding a publicly available AI model outperformed 94% of virologist experts at troubleshooting protocols.

AI's capacity to uplift dangerous skills was again in the spotlight when a New York Times investigation published transcripts of AI chatbots describing how to acquire genetic material, synthesise pathogens, and disperse them in public spaces. Stanford's David Relman said one chatbot identified a security lapse in a public transit system and outlined a plan to maximise casualties, and MIT's Kevin Esvelt got Gemini to rank pathogens by economic damage to UK livestock industries. Dr Cassidy Nelson, a letter signatory and director of biosecurity policy at the Centre for Long-Term Resilience, described current safeguards as "a flimsy wooden fence that is easy to overcome". A day later, Anthropic released BioMysteryBench, an expert-authored bioinformatics benchmark on which Mythos Preview (Anthropic's latest unreleased model) reached 82.6% accuracy on the human-solvable set.

In response, Minister Ayres' office pointed to a multi-agency working group on synthetic biology and AI, while Minister Collins' office cited the $2 billion Labor has invested in biosecurity resourcing since 2022.

Comment:

Neither Minister has formally responded to the open letter's call to reduce the risk of bioweapons development by mandating gene synthesis screening. Australia has committed under the Biosecurity Act to reducing biosecurity risks to 'a very low level'. Given what AI assistance now makes possible, unscreened synthetic nucleic acid imports do not meet that standard. The Department could add a new BICON condition today, without new legislation.

Screening is conducted digitally by the synthesis provider before dispatch, so the compliance burden falls on overseas exporters, not Australian researchers or the regulator. Providers that do not currently screen would face a choice: adopt screening or lose Australian business — making it more likely they screen all orders by default, not just those destined for Australia. Australia acting also creates a precedent that strengthens the international case for equivalent measures elsewhere.

When Government ruled out AI-specific legislation, it said existing laws are sufficient and existing regulators are best placed to act. That argument only holds if those regulators act.

Australia's AI strategy is taking shape — frontier training remains an open question

Assistant Industry Minister Andrew Charlton's "Sliding Doors" UTS speech argued that Australia must build, not just use, AI: "Markets alone do not decide a nation's destiny. Choices do." The AFR reported a blitz "to stop AI profits flowing to Silicon Valley".

Charlton emphasised backing Australian AI capability in sectors like healthcare, agriculture, and advanced manufacturing. He has name-checked applied-AI firms like Sydney customer-support startup Lorikeet (software wrapping frontier models), which he noted ranks eighth globally for start-up AI spend, and narrow-AI firms like Harrison.ai, whose specialised medical-imaging models are now used by half of all Australian radiologists and across more than 40 NHS Trusts.

The other two parts of the opportunity strategy involve physical infrastructure and attracting investment. Microsoft has signed an MOU with the Australian Government and announced A$25b in Australian AI inference infrastructure investment by 2029.

The larger opportunity is training. While plans for Project Meridien, a 240-megawatt (eventually 1-gigawatt) Kimberley data centre co-owned by the Karajarri Traditional Owners, were unveiled to "support large-scale AI training, including systems like ChatGPT", training frontier models in Australia remains elusive. The Government's response to the Senate AI inquiry rejected a text-and-data-mining (TDM) copyright exception, choosing instead to pursue licensing through the Copyright and AI Reference Group, but no workable alternative has yet been proposed.

CNAS's Janet Egan argued that with global chip supply constrained, an Australian buildout on clean energy could displace dirtier compute elsewhere — making Australia a "data-centre superpower". But AI companies are unlikely to license Australian material directly if doing so would undermine the US fair-use doctrine that has allowed them to develop AI. YouGov polling commissioned by Good Ancestors found 61% of Australians support enabling AI training under arrangements that compensate creators, while only 15% want to keep laws unchanged. Meanwhile, Australia's largest superannuation fund has taken stakes in OpenAI, Anthropic and xAI — Australian retirement capital flowing to where AI training actually happens.

Comment:

Some narrow-AI models continue to outperform frontier general models, but many others have been overtaken. Every leap in frontier AI performance creates a new graveyard of startups that were trying to stay ahead of the frontier in a given niche. Supporting narrow and sector-specific AI in Australia can be part of the plan, but it can't be the whole plan.

Government also needs a plan in case the frontier keeps moving quickly. While many countries will host the inference compute that powers frontier AI, frontier training compute will be far more centralised and may be Australia's single best opportunity. Australia has abundant clean energy, available land, and political stability, and because training, unlike inference, is not latency-sensitive, distance is no barrier.

Copyright is the elephant in the room. Currently, training AI in Australia would require licensing the global corpus of copyright material, an unrealistic bar when the same material is available free elsewhere. Government has yet to publicly acknowledge this red line and has ruled out TDM with no feasible alternative.

Polling suggests public support for a model that compensates creators without requiring copyright licensing. This could look like AI companies paying a fee into consolidated revenue in exchange for a permit providing a statutory authorisation to use copyright material for AI training purposes. The permit could also require AI companies to meet the data centre expectations and other national interest requirements. Government could run a separate public benefit fund to meet the expectations of rights holders.

Currently, Australia's national interest depends on two third parties reaching an agreement. Government should step into the middle and reach the best possible deal with each group separately.

Australia shut out of Anthropic's Mythos defensive coalition

In the month since Anthropic disclosed Claude Mythos Preview — the model it called too dangerous to release publicly — independent testing has confirmed the model can autonomously discover and exploit cyberattack vulnerabilities at scale. The UK AI Security Institute found Mythos Preview to be the first model able to complete its 32-step corporate network attack simulation end-to-end, a task it estimates takes a human professional around 20 hours. Mozilla applied Mythos to Firefox; the company's release of Firefox 150 includes fixes for 271 vulnerabilities surfaced during that initial evaluation, and Mozilla wrote that "defenders finally have a chance to win, decisively."

However, the dangerous capability has not stayed contained. Bloomberg reported that unauthorised users gained access to Mythos on day one through a third-party vendor environment, and the group has continued to use the model since. The NSA is now testing Mythos against Microsoft products, with staff describing themselves as "impressed by its speed and efficiency". Hot on the heels of Mythos, OpenAI's GPT-5.5 reached comparable cyber capability, becoming the second frontier model to solve UK AISI's 32-step simulation.

Anthropic's Project Glasswing — the defensive coalition formed alongside Mythos to harden critical software against these capabilities — has expanded to about 50 vetted partners, all US-domiciled or US-headquartered, with no Australian organisation named, IDM reports. India, where Finance Minister Nirmala Sitharaman has called the Mythos threat "as big as war," is publicly seeking equitable access for its critical infrastructure. OpenAI has launched a parallel programme — Trusted Access for Cyber — using tiered identity verification rather than invitation-only partnerships, which IDM notes may offer Australian security professionals a more realistic route to frontier-AI defensive access than Glasswing.

Comment:

We asked last edition whether the Australian Government would have early access to harden public-facing systems like MyGov or Medicare against Mythos-class threats. The public answer is "no". Australian critical infrastructure remains exposed for as long as it takes these capabilities to leak (which has already happened) or be replicated by less-restrained actors.

Australia's strategy, to the extent we have one, leans on bilateral MoUs. Anthropic's is a non-binding statement of intent promising "technical exchanges" with the AISI — not early access for critical infrastructure operators. The Slay review found the SOCI regime "ill-equipped" for AI-related risk, and Good Ancestors' submission argued general-purpose AI models are themselves becoming critical infrastructure.

The National AI Plan is considering whether to plan for an AI crisis. The release of new models with dangerous capabilities is exactly the kind of thing that should trigger an AI crisis plan and activate the National Coordination Mechanism. Organisations with equities (such as critical infrastructure and essential government services) need to get around the table with AI companies and AI experts to understand the new risk, how they can manage it, and what will happen if they don't. Currently, Australia has all the risk, no mechanisms to understand it, and no levers to address it.

CAISI signs pre-release testing agreements with Google, Microsoft and xAI

The US Center for AI Standards and Innovation (CAISI) — formerly the US AI Safety Institute, rebranded under the Trump administration — signed pre-release testing agreements with Google, Microsoft, and xAI on 5 May. Under the deals, the three companies will give the US government early access to unreleased frontier models for evaluation on cybersecurity, biosecurity, and chemical-weapons risks before public deployment. CAISI Director Chris Fall said "independent, rigorous measurement science is essential to understanding frontier AI and its national security implications". Microsoft separately signed a similar agreement with the UK AI Security Institute the same day.

The day before, the New York Times reported that the Trump administration is considering executive action to mandate similar pre-release reviews across the US AI sector, citing Anthropic's Mythos as the catalyst — though the administration has characterised the reporting as "speculation". President Trump himself, asked on Fox Business on 15 April whether AI needs safeguards or a "kill switch", answered "there should be".

Comment:

The Trump White House is reportedly considering an executive order to make pre-release review the standard. Australia's MoUs with Microsoft and Anthropic are non-binding statements of intent — not structured access to specific models for specific risks. Australia's AI Safety Institute should urgently pursue arrangements that let us understand and address risks.

Australia's exclusion from early access to Mythos has exposed a significant weakness that we need to address before more capable, higher-risk models are released.


That's all for now!

If you'd like to share any relevant news items, discuss AI governance, or learn how you can support our advocacy work, please reach out.

Onward in action!

The Good Ancestors team
