Emerging AI Challenges - Australian Code of Practice for App Store Operators and App Developers

Background

The Australian Government is developing a Code of Practice for app stores and developers, building on the UK Government's existing framework. However, substantial advances in Artificial Intelligence since the UK Code's last update in 2023 mean there are specific ways Australia could improve the UK model to address emerging AI risks.

Australia's cyber security strategy acknowledges that "Artificial intelligence and machine learning will bring new kinds of risk" and commits to promoting the safe use of emerging technology. A new Code of Practice presents an opportunity to deliver on those commitments by going beyond conventional application security to address the unique challenges posed by AI-powered applications.

AI risks create new challenges for app security

The UK AI Security Institute reports that criminal misuse of advanced AI is already here, with AI being used to support cybercrime, social engineering, impersonation scams, and other malicious activities. Three key AI capabilities are advancing rapidly and creating new security challenges:

  1. Multimodal generation: Realistic audio, video, and images can now be generated with minimal effort, enabling more convincing deception and abuse
  2. Advanced planning and reasoning: AI can design and adapt sophisticated attack strategies when combined with search capabilities
  3. AI Agents: AI systems that can take autonomous actions may enable persistent, large-scale criminal activity without human oversight

Our submission

Good Ancestors identified seven critical AI risks that are not covered by the UK model but should be addressed in an Australian Code of Practice:

  • Unauthorised Agent Action: AI agents acting beyond their granted authority, for example making unauthorised purchases or communications
  • Alignment Faking: AI systems deceptively appearing to follow instructions while engaging in other behaviours, such as the AI agent in Anthropic's Project Vend fabricating email exchanges
  • AI-Specific Cybersecurity Vulnerabilities: Prompt injection, model inversion attacks, and other novel attack vectors, including incidents such as OpenAI's March 2023 data breach, in which users could access others' chat histories
  • Lack of Transparency of AI Capabilities: Users unaware when they are interacting with highly persuasive or deceptive AI systems
  • Possession of Dangerous Information: AI models trained on vast datasets that inevitably contain instructions for harmful activities, with OpenAI reporting models approaching bioweapon development capabilities
  • Lack of Model Transparency: Inability to assess an app's risks without knowing the underlying AI model it uses
  • Unpatchable Open-Weight Models: AI models that cannot be recalled or updated once released, creating permanent risk vectors
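
To make several of these risks concrete, the sketch below shows what a disclosure manifest for an AI-powered app might look like at the point of store submission. This is purely illustrative: the field names and the AIDisclosureManifest structure are assumptions made for this summary, not part of the UK Code, any existing app store requirement, or our formal recommendations.

```python
# Illustrative only: a hypothetical disclosure manifest an app store operator
# could require from AI-powered apps. Field names and structure are assumptions,
# not an existing standard or the submission's proposed wording.
from dataclasses import dataclass, field


@dataclass
class AgentPermissions:
    """Explicit limits on autonomous actions (Unauthorised Agent Action)."""
    can_make_purchases: bool = False
    can_send_communications: bool = False
    spending_limit_aud: float = 0.0


@dataclass
class AIDisclosureManifest:
    """What an app declares about its AI components at submission time."""
    underlying_model: str            # vendor and version (Lack of Model Transparency)
    open_weight: bool                # flags unpatchable open-weight models
    generates_synthetic_media: bool  # multimodal generation capability
    discloses_ai_to_users: bool      # Lack of Transparency of AI Capabilities
    agent_permissions: AgentPermissions = field(default_factory=AgentPermissions)


def review_flags(manifest: AIDisclosureManifest) -> list[str]:
    """Return issues a store reviewer might escalate; criteria are illustrative."""
    flags = []
    if not manifest.discloses_ai_to_users:
        flags.append("App does not tell users they are interacting with AI")
    if manifest.open_weight:
        flags.append("Open-weight model: post-release patching may be impossible")
    if manifest.agent_permissions.can_make_purchases:
        flags.append("Agent may transact autonomously; check spending controls")
    return flags
```

A reviewer-side check like review_flags shows how an operator could act on such declarations, for instance by escalating apps that grant an agent purchasing authority or that rely on an open-weight model.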

Australia should lead in AI-ready app security

While harmonisation with international standards is sensible, Australia has an opportunity to demonstrate global leadership by creating a code that is truly fit for the AI era: 94% of Australians think Australia should play a leading role in international AI governance and regulation. Our submission recommends that Australia either:

  1. Expand beyond the UK model to incorporate principles governing AI's unique challenges, or
  2. Include a roadmap to work with the UK on a joint review and update of the Code

Key recommendations

Our submission argues that effective AI governance requires addressing the fundamental differences between AI systems and traditional software:

  • Move beyond reactive security: Traditional vulnerability-patch cycles don't work with "unpatchable" open-weight models
  • Address probabilistic vs deterministic systems: AI integration means dealing with uncertainty and emergent behaviours, not just malicious code
  • Require AI capability disclosure: Users need to know when they're interacting with persuasive or deceptive AI systems
  • Mandate underlying model transparency: Risk assessment requires knowing which AI model an app uses
  • Implement AI-specific security measures: Protect against prompt injection, model data leakage, and other novel attack vectors (a minimal screening sketch follows this list)
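
As an illustration of the last point, the sketch below screens untrusted text for obvious prompt-injection phrasing before it is passed to a model. The pattern list and function names are assumptions made for this example; pattern matching alone is not a complete defence and would need to sit alongside output filtering, privilege separation, and model-side mitigations.

```python
# Minimal sketch of one AI-specific control: screening untrusted text for
# common prompt-injection phrasing before it reaches a model. Patterns and
# names are illustrative assumptions, not a recommended or complete defence.
import re

INJECTION_PATTERNS = [
    r"ignore (all|any|previous) instructions",
    r"disregard (the )?system prompt",
    r"you are now (in )?developer mode",
    r"reveal (your|the) (system prompt|hidden instructions)",
]


def looks_like_prompt_injection(untrusted_text: str) -> bool:
    """Return True if the text matches a known injection phrasing."""
    lowered = untrusted_text.lower()
    return any(re.search(pattern, lowered) for pattern in INJECTION_PATTERNS)


def build_prompt(system_prompt: str, untrusted_text: str) -> str:
    """Reject suspicious input and quarantine the rest behind clear delimiters
    rather than concatenating it directly into instructions."""
    if looks_like_prompt_injection(untrusted_text):
        raise ValueError("Untrusted input rejected: possible prompt injection")
    return f"{system_prompt}\n\n<untrusted_content>\n{untrusted_text}\n</untrusted_content>"
```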

The submission emphasises that simply adopting the UK model would leave Australian consumers and businesses exposed to rapidly evolving AI risks.

Supporting international cooperation

Good Ancestors recommends that Australia offer to cooperate with the UK, including the UK AI Security Institute, to ensure harmonisation of an updated Code. The Australia-UK Free Trade Agreement and the Network of AI Safety Institutes could facilitate this cooperation.

The submission also notes that creating an Australian AI Safety Institute would facilitate future work at the intersection of AI and cybersecurity, supporting both domestic security and international leadership in AI governance.