International AI Safety Report
International Scientific Report on the Safety of Advanced AI
Background
The 2026 International AI Safety Report is a landmark effort to synthesise the global scientific understanding of AI risks. The Report provides decision-makers in government and beyond with a shared and authoritative global picture of AI's risks and impacts.
Good Ancestors Policy was one of a handful of civil society organisations globally that contributed to the Report as a reviewer. AI safety science is the cornerstone of our work, and we are proud to have contributed to this effort. We are available to brief on the Report's findings and what they mean for Australia.
Key findings
AI capabilities are advancing rapidly
AI capabilities have continued to improve, especially in mathematics, coding, and autonomous operation. AI companies are investing heavily in "AI agents" — autonomous systems that can perform tasks with little to no human oversight. These advances are expanding the range of tasks AI can perform and raising new questions about oversight and accountability.
Biological weapons risks
Advances in AI's scientific capabilities have heightened concerns about misuse in biological weapons development. AI systems can provide detailed information relevant to developing biological weapons, and AI agents can complete complex tasks, such as offering accessible interfaces that help users operate specialised laboratory equipment. This is an area where emerging science demands urgent policy attention.
Cyber security threats
AI systems are increasingly used in real-world cyber operations. They can discover software vulnerabilities, write malicious code, and automate parts of cyberattacks. As AI capabilities improve, the scale and sophistication of AI-enabled cyber threats are likely to grow.
Safety testing is getting harder
AI models can sometimes identify when they are being evaluated, allowing them to behave differently under testing and deceive testers. This makes safety testing harder and raises fundamental questions about our ability to assess the true capabilities and alignment of advanced AI systems.
Loss of control
AI systems could pursue goals that conflict with human interests. Some AI researchers and company leaders believe loss of control is a serious possibility, with consequences potentially including human extinction. Others consider such scenarios implausible. This disagreement reflects different assumptions about what future AI systems will be able to do, how they will behave, and how they will be deployed.
Information asymmetries
Information asymmetries mean that policymakers often lack the information they need to make good decisions about AI. AI developers have information about their products — such as training data and evaluations — that they often do not share with policymakers and researchers, limiting external scrutiny.
What this means for Australia
Because the Report focuses on the science, it makes no policy recommendations. Good Ancestors can fill this gap, offering perspectives on what the science means for Australia. More broadly, the Report underlines that Australia needs robust, science-informed AI governance.