DISR, COMMONWEALTH

Safe and responsible AI

Background

Minister Husic and the Department of Industry, Science and Resources launched a “safe and responsible AI” consultation to seek views on how the Australian Government can mitigate potential risks of AI. The consultation was supported by a paper from Australia’s Chief Scientist about the risks and opportunities of generative AI.

GAP believes that how governments around the world handle rapidly progressing AI is pivotal to safeguarding a flourishing future for all. Reading the Government’s papers, we were concerned that the Government may be disregarding the potential for catastrophic and existential risks in pursuit of economic benefit.


Our submission

GAP made a submission calling on the Government to:

  1. Acknowledge potential catastrophic and existential risks from artificial intelligence
  2. Broaden its risk-management efforts to cover pressing risks that are causing harm today (e.g., misinformation, algorithmic bias), the intensification of those risks (e.g., personalised scams and deepfakes) and novel sources of risk (e.g., the misuse of dangerous emergent capabilities or “rogue AIs”)
  3. Lead and participate in international efforts and agreements focused on identifying and mitigating the full range of AI risks
  4. Build AI safety capability in Australia, including by funding AI safety research, establishing an Australian AI lab to assess and build safe AI models in Australia, and improving AI expertise in Government, and
  5. Ensure Australians have access to justice by creating a legal system that appropriately holds people accountable for harms caused by AIs.


Supporting the community

GAP understands that broad alliances are required to tackle the world’s most pressing problems. With that in mind, GAP volunteered its support and expertise to Australians for AI Safety, which wrote an open letter to Minister Husic explaining that the economic potential of advanced AI systems will only be realised if we make them ethical. The letter also set out a range of practical actions the Australian Government could take to further AI safety. GAP’s CEO, Greg Sadler, acted as a spokesperson for Australians for AI Safety and worked with signatories to bring the message to the public.

Finally, GAP believes in the importance of democracy and empowering the public to engage in the policy conversations that matter most. Through workshops held across Australia and online, GAP helped Australian community members who care about risks from AI to develop their own submissions to the consultation. The workshops helped participants understand the process, provided them with high-quality evidence drawn from the best international research, and helped them craft submissions that best articulated their personal beliefs.