Automated Decision-Making

Background

Following the Government's Safe and Responsible AI consultation in September 2024, specific areas of AI policy were referred to lead departments. The Attorney-General's Department was tasked with developing a framework for automated decision-making (ADM) by Government, particularly in the context of administrative law decisions.

The consultation also connects with relevant recommendations from the Robodebt Royal Commission, which called for the Australian Government to establish a consistent legislative framework for automation in government services. While ADM has been used in government for many years, the rise of generative AI and its rapidly increasing capabilities make this framework crucial for the future of government service delivery.

Our submission

Our submission argued that public servants need technical tools to provide meaningful oversight of AI. Just as engineers rely on precision instruments, rather than their unaided eyes, to validate machined parts, officials overseeing AI-aided decisions need appropriate tools.

The need for support tools only becomes more important as AI becomes more powerful.

We made three recommendations:

  1. A "human in the loop" alone is not sufficient to provide meaningful oversight of AI systems, especially as they become more capable. Any ADM framework cannot just be about deploying human skill in particular ways.
  2. We need scalable oversight tools that can grow with AI's capabilities. Government has a legitimate interest in market shaping to support scalable oversight. Government should signal in the ADM framework that it intends to purchase the best scalable oversight tools that the market can provide.
  3. Establishing an Australian AI Safety Institute is necessary to provide Government technical expertise on scalable oversight, and to deliver on commitments like Recommendation 17.2 of the Robodebt Royal Commission and the Seoul Declaration on AI Safety.