AI in Australia: What you need to do to prepare
It’s no secret that artificial intelligence (AI) hysteria has reached fever pitch. Recently, as part of the 2023-24 Budget, the Commonwealth Government doubled down on AI by committing AUD41.2 million to support AI deployment in the national economy. Yet despite the fervour for ChatGPT to rule the world, the regulatory framework for AI remains murky, and it continues to bewilder industry and the public as much as the technology itself thrills them. Without clear guardrails, AI has not engendered public confidence, and adoption in Australia has remained low.
To understand and address the regulatory uncertainty, the Australian Government, through the Department of Industry, Science and Resources, released a Discussion Paper on 1 June 2023 seeking industry feedback on how AI should be regulated in Australia. Here’s what you need to know about how AI is regulated today, what the regulatory path ahead might look like, and how your organisation can prepare for the dawning of the age of AI.
The Australian AI Landscape
To date, the only AI-specific governance framework in Australia is the AI Ethics Framework, introduced in 2019 (Framework). The Framework is a set of eight voluntary principles that aim to ensure AI technology is developed and used safely, securely and reliably.
AI in Australia is otherwise regulated, at least in part, by existing law. For example:
- the Privacy Act 1988 (Cth) (Privacy Act) governs how personal information is used and disclosed in connection with the training of AI algorithms;
- the Australian Consumer Law will apply to AI-driven technologies implemented in a consumer-facing context;
- Australia’s anti-discrimination laws can provide individuals with a remedy where they have suffered a discriminatory outcome resulting from an AI-driven process; and
- copyright and other intellectual property laws in Australia can help to regulate intellectual property rights related to AI, particularly in the context of generative AI tools such as ChatGPT, which can use copyright-protected materials as input data and can produce output that may infringe intellectual property rights.
Do we need AI-specific regulation?
While the Framework and existing legislation can address some of the risks of AI, existing law was not created with AI in mind and therefore cannot fully reflect the technology’s uniqueness, breadth of capability or risk profile. For example, there are no enforceable regulatory guardrails restricting developers from creating AI technology that could cause harm (such as an AI application that uses or discloses an individual’s personal information without authorisation). The current regulatory framework can only address harm after it has occurred.
There are also AI-specific gaps in regulatory coverage. For example, a decision-making AI algorithm trained on particular data may produce biased or discriminatory outcomes, yet there is no legislated right for individuals to request information about how an AI-driven decision was made; such a right could provide a way to address potential bias or discrimination.
A sector-based approach?
The Discussion Paper canvasses a host of possible approaches to regulating AI. It’s clear that AI regulation is coming; the broad question is whether Australia should adopt a centralised or a decentralised approach.
The centralised approach involves introducing an AI-specific set of regulations. This is the approach taken by the EU, whose landmark AI Act regulates the development of AI applications according to the risk profile of the relevant application of the technology. A weakness of this approach is that, given the rapidly evolving nature of the technology, keeping the regulation current and fit for purpose could be difficult. However, we think it is possible for regulation to address issues on a principled basis without risking immediate redundancy.
The decentralised approach is a sector-based approach under which individual sector regulators regulate AI according to the risks particular to that sector. Because AI is used quite differently in, for example, banking and insurance than in consumer goods and retail, we think AI deserves different treatment in different sectors, although we acknowledge the risk of added complexity and of missed synergies across regulated areas.
How should you brace your business for regulation?
Even before AI-specific regulation is passed, organisations should practise good ‘AI hygiene’. Think about:
- Internal AI audit – identify the AI-driven products used in the business, both internally and by customers, whether hosted internally or provided on an outsourced or as-a-service basis.
- Understand data flows – develop an understanding of how data flows through the organisation, whether any of that data touches the AI products used by the business, and how. This will help identify the extent to which guardrails or information barriers may need to be implemented at an organisational level to mitigate, for example, privacy risks.
- Understand the existing landscape – understand the existing regulatory landscape as it applies to your business.
- Policy review – use the information from the above investigations to review existing policies and procedures (such as information security policies and practices, privacy policies and relevant customer-facing policies) to ensure that they are fit-for-purpose.
- Implement best practice – standards bodies such as NIST and ISO have developed frameworks and standards for AI development and risk management, which organisations can adopt to ensure that internal processes related to AI reflect best practice.
- Contract review – consider whether customer contracts (whether of AI developers, or of user organisations whose AI use affects or interacts with their end customers) appropriately deal with matters such as privacy, liability, indemnities and intellectual property rights in a way that works with the relevant AI product.
How can DLA Piper help?
DLA Piper is well placed to assist any organisation that wishes to understand its existing legal and ethical obligations in relation to AI, or the preparatory measures it may take to de-risk its development and/or use of AI. Our global, cross-functional team of lawyers, data scientists, programmers and policymakers delivers practical solutions on AI adoption, procurement, deployment, risk mitigation and monitoring.
Because AI issues are rarely industry-agnostic, our team includes sector-focused lawyers with extensive experience in highly regulated industries. We’re helping industry leaders and global brands across the technology, life sciences, healthcare, financial services, insurance and transportation sectors stay ahead of the curve on AI.