
31 January 2024 | 9 minute read

AI Regulation in Australia

What we know and what we don't

Earlier this month, the Australian Government, through the Department of Industry, Science and Resources, announced its intention to implement a suite of mandatory safeguards for the development and deployment of high-risk artificial intelligence (AI) use cases. It did so in its interim response to its 2023 consultation paper “Safe and responsible AI in Australia” (Paper) (see our related alert here), in which it set out its broad scheme for the regulation of AI in Australia.


What we know

The Government will adopt a risk-based approach to the regulation of AI, seeking to prevent the harms associated with AI use by regulating the development, deployment and use of AI in high-risk contexts only, while allowing other, lower-risk forms of AI to “flourish largely unimpeded”. To do this, the Government plans to develop its suite of mandatory requirements through further public and industry consultation and a to-be-established advisory body. Those requirements will centre (at least initially) on the themes of:

  • testing and audit, including in relation to product safety and data security;
  • transparency, including in relation to public reporting obligations, and disclosures regarding model design, use of data and watermarking of AI-generated content; and
  • accountability, including in relation to organisational roles and responsibilities and training requirements.

The Government will also take immediate interim measures to ensure AI safety, including developing an AI safety standard and options for the watermarking of AI-generated content (although compliance with the standard, and with any watermarking conventions, would be voluntary in the first instance).

It is also worth noting that the Government’s further consultation will occur in conjunction with current domestic legal and regulatory reviews and legislative reforms (the Privacy Act review, the copyright enforcement review, the enactment of online safety and misinformation legislation, and so on); it will be interesting to see the extent to which these parallel processes influence the road to AI regulation in Australia.


What we don’t know

While the Government has provided a high-level view of what the future of AI regulation in Australia may look like, its response to the Paper is silent on some areas that we would expect to see addressed in an extensive regulatory approach, including the Government’s position on AI-related policy, legislative and budgetary matters.

Accordingly, it does not yet appear that parties on either side of the “AI regulation fence” have sufficient information to allay their concerns: on the one hand, that AI regulation will stifle innovation and compromise Australia’s ambition to be a leading global tech innovator; on the other, that it will be impotent in protecting Australians from the potential harms of AI. So what else do we need to know?

AI-specific regulatory treatment

The introduction of AI-specific legislation was a topic on which opinion was heavily divided in submissions to the Paper. While the Government has now confirmed that it will introduce AI-specific laws (to implement the mandatory requirements around the themes noted above), other details remain unclear, including:

  • the form the AI laws will take, which will affect how readily they can be amended;
  • who will enforce these laws; and
  • how quickly the laws will be passed and implemented (the journey that Europe has been, and remains, on with the EU AI Act illustrates how long this can take).

The Government’s response also reflects the wider recognition that comprehensive regulation of AI is difficult, not least because of the constantly evolving nature of the technology. This raises the question: how best to future-proof AI regulation? How does one design a regulatory regime that provides certainty and consistency for those it protects, while also being sufficiently flexible and adaptable to respond to changes in technology? Despite efforts in other areas of legislation and regulation (notably those affecting privacy and a range of telecommunications services) to adopt “technology neutral” approaches, the reality is that it is difficult to cover all possibilities. That said, we consider this no reason not to develop and implement laws that seek to regulate novel and developing technologies.

There also appears to be no mention of any plans for a dedicated regulator, which makes one wonder whose jurisdiction AI will fall under. Will there be a supreme regulator, or will an oligarchy of regulators representing particular AI-related risks (for example, privacy), or even sectors (for example, financial services), be jointly in charge of how AI interacts with the regulated population? And what of potential inconsistency between regulatory approaches across sectors?

A gap-filling exercise?

Submissions on the Paper (of which there were over five hundred) brought to light ten separate legal and regulatory frameworks that the Government has since acknowledged (in its response) are not fit for purpose. However, despite signalling its intent to look into several of these frameworks (for example, privacy and online safety), it remains unclear how they will be amended to deal with the raft of issues presented by AI technologies, how heavy-handed the Government will seek to be in the first instance, and how its approach may differ from that of Australia’s key economic partners.

When we consider the current state of flux in our understanding of traditional legal and commercial principles in an AI-driven world, it is not surprising that the Government has not shown its hand here (it may not even have one), so we may well see some delay while the Government watches with interest the experience of the EU and other jurisdictions in regulating how AI interacts with our lives.

One key question, however, is how a “risk-based management approach” to regulating AI will be defined and implemented. The Government acknowledges that, while the consensus among submissions to the Paper was that a risk-based approach is appropriate, such an approach has limitations. In particular, the Government notes that a risk-based approach is unlikely to adequately anticipate and address unpredictable risks, particularly those posed by general-purpose AI models that could equally be put to low-risk uses. Describing and defining the boundaries and parameters of these categories of risk will therefore be one of the key areas of focus in developing the AI regulatory regime.

What constitutes high-risk AI?

While the Government’s response refers to the definitions of “high-risk” AI adopted in certain key jurisdictions, it does not offer its own definition, instead noting that further work needs to be undertaken to properly define high-risk AI in the Australian context.

However, one may reasonably predict that any Australian legal definition will borrow from the recently agreed EU AI Act, which defines “risk” by reference to the combination of the probability of harm occurring and the severity of that harm, and defines high-risk AI to include:

  • AI designed to be used as a safety component of a product;
  • some systems used in the education, employment and recruitment areas; and
  • risk assessment and pricing algorithms used in the health and life insurance context.

What about the “foundation model”?

The Government briefly addressed foundation (or “frontier”) models in its response. This was fairly predictable, given the high-profile debate around whether and how foundation models should be regulated under the EU AI Act. The Government notes that these models should be given specific consideration, due to their scale and their potential to power all manner of use cases.

While we can predict with reasonable confidence that the Government will opt to regulate foundation models in a similar way to the EU (i.e. via transparency, registration and governance requirements), it remains to be seen how (if at all) the Government will attempt to further regulate these models, taking into account the raft of issues they can present in enabling fraud and cybercrime, propagating bias and discrimination, infringing intellectual property rights and breaching privacy.


What can you do (for now)?

While Australia’s path forward is uncertain, developers, deployers and users alike are on the road to regulation in some form. But how best can an organisation prepare now? Organisations should ensure that their internal processes, procedures and broader compliance environments allow for safe, secure and responsible AI development and deployment. They can also get ahead of the curve by following best practices for AI implementation, use and governance, including by looking to the EU AI Act, regulation from other parts of the world and newly released standards for AI implementation and governance (for example, those from NIST and ISO).

Of course, taking such measures now may necessitate some duplication of effort once regulation is introduced, but even absent regulation, organisations should be implementing organisational measures to ensure “AI hygiene” and minimise the risks deriving from AI use. To that end, organisations may wish to consider some of our recommendations in one of our other articles.


How can DLA Piper help?

DLA Piper is well placed to assist any organisation that wishes to understand the road ahead toward AI regulation in Australia, the potential opportunities and risks that the adoption of AI may present, or the preparatory measures it may take to de-risk its development and/or use of AI. We bring together a global depth of capability on AI with an innovative product offering in a way no other firm can. Our global, cross-functional team of lawyers, data scientists, programmers and policymakers delivers technical solutions on AI adoption, procurement, deployment, risk mitigation and monitoring.

Further, because AI is not always industry-agnostic, our team includes sector-focused lawyers with extensive experience in highly regulated industries. We’re helping industry leaders and global brand names across the technology, life sciences, healthcare, insurance and transportation sectors stay ahead of the curve on AI.
