New research by a QUT law academic has highlighted the need for new legal definitions and frameworks to protect the public from the risks of automated decision-making exposed by the robo-debt debacle.
- Administrative law, which governs automated government decision-making systems, was written for humans, not machines
- Laws may need to be reframed and recrafted to suit automated decision-making and to enable judicial review and accountability
- Automated systems can apply flawed decision-making logic to a high number of decisions that significantly affect people’s lives
Associate Professor Anna Huggins, from the QUT School of Law, said the $1.8 billion robo-debt class action was a stark reminder of the risks of automated decision-making without meaningful human input, as government departments increasingly move towards automation.
“We need new legal solutions to maintain administrative law protections in automated decision-making (ADM),” Professor Huggins said.
“Administrative law (law which regulates executive government decision-making) provides an important framework to promote accountability for executive action and protection of individual rights.
“But administrative law rules were developed for decisions made by humans, not computers, and automation has distinctive features that cannot easily be reconciled with those rules.
“I have identified a range of potential disconnects between ADM and administrative law, which was developed for human-centric decision-making; these disconnects are limiting the options for both judicial review and executive accountability.
“This is concerning as ‘robo-debt’ has given us a compelling example of automated systems’ potential to replicate a flawed decision-making process on an unprecedented scale and cause significant distress and injustice for thousands of people.”
Professor Huggins said some aspects of administrative law might need to be reframed and recrafted to ensure they remained relevant to ADM.
“A key area of disconnection is the new types of bias associated with opaque ADM, which are not easily accommodated by the bias rule – the rule that a decision-maker must approach a matter with an open mind and without prejudgment – or by other relevant grounds of judicial review,” she said.
“Developing computer programs requires myriad design choices which can reflect the conscious or unconscious biases of programmers.
“Machine-learning algorithms can perpetuate historical human biases in the datasets used to train them, such as race and gender discrimination.
“Whereas biased human decision-making processes are likely to affect a relatively small number of cases, automated systems have the potential to apply a flawed decision-making logic to a very high number of decisions.
“The opaque nature of ADM processes and the general public’s inability to read or write computer code present major hurdles for individuals seeking judicial review and limit their ability to contest automated decisions.
“Moreover, the rule against bias seeks to prevent decision-makers from exercising their power if there is evidence that they are actually or ostensibly biased.
“However, the phenomenon of ‘automation bias’ refers to humans’ susceptibility to defer to a computer program’s outputs due to a perception that they are superior or even infallible.
“This type of bias can lead decision-makers to trust and accept the outputs of automated processes without further scrutiny despite the risk of errors due to flaws in the ADM system’s design, data inputs or underlying code.”
Professor Huggins said her study suggested that legal concepts of what constitutes a decision and other relevant features of administrative law should be reframed and recrafted to accord with the reality of ADM.
“Regulatory reform is also warranted to ensure that administrative law protections remain meaningful in an increasingly automated environment.”
The research was published in the University of New South Wales Law Journal.