

28 June 2023 · 17 minute read

AI in Australia

The regulatory road ahead

On 1 June 2023, the Australian Government, through the Department of Industry, Science and Resources, released a Discussion Paper (Paper) that provides an overview of global approaches to the regulation of artificial intelligence (AI) and invites contributions from industry on whether the existing regulatory environment adequately caters for the rapid emergence of AI-driven technologies, or whether enhanced regulation is required to ensure the safe and responsible development and use of AI.

Specifically, the Paper seeks feedback on a series of questions designed to assist Government in understanding industry views on the risks of AI, the extent to which those risks are not addressed by the existing regulatory environment, and potential regulatory models. The Paper also seeks views on whether certain high-risk applications of AI should be banned altogether.

 

The Australian AI Landscape

The release of the Paper, combined with a recent Australian Government commitment of AUD 41.2 million to support the responsible deployment of AI in the national economy in FY 2023-24, signals an increased focus by Government on growing AI as a key component of Australia’s broader technology strategy. It also recognises that regulation must develop alongside innovation, particularly given that AI regulation is currently front and centre for many other developed nations, including Australia’s key trading partners.

Despite this fertile landscape for the adoption of AI, however, the Paper acknowledges that rates of AI adoption in Australia are relatively low, even as the technology gains momentum globally and new applications arise frequently across a range of sectors. The primary cause is a perceived lack of public and industry trust and confidence in the technology itself. The proliferation of large language models and multimodal foundation models (such as those that power applications like ChatGPT and DALL-E) has been instrumental in shaping public perception of AI and has brought to light some of the technology’s inherent risks and complications. Accordingly, the Government’s view is that, to increase rates of adoption and build public trust and confidence in AI, the regulatory frameworks within which it is developed and used need to be fit-for-purpose: addressing the inherent risks posed by the technology while remaining flexible enough to facilitate innovation.

 

The current regulatory approach

To date, AI-specific governance responses in Australia have been voluntary, in the form of an AI Ethics Framework introduced in 2019 (Framework). The Framework consists of eight voluntary principles for the development and use of AI which, if adopted and complied with by an organisation, help to ensure that the AI developed by that organisation is safe, secure and reliable. The Framework is also consistent with the OECD’s AI Principles, to which Australia is a signatory.

While there is no enforceable AI-specific regulation in Australia, various existing Australian laws can be applied to address some of the risks posed by AI technologies in their design, development and use. For example:

  • the Privacy Act 1988 (Cth) (Privacy Act) governs the way in which personal information is used and disclosed in connection with the training of AI algorithms. The Office of the Australian Information Commissioner recently determined that Clearview AI breached the Privacy Act by using a data-scraping tool to collect individuals’ biometric information from the internet, which was then disclosed via an AI-driven facial recognition tool; Clearview AI was also ordered to cease collecting such information and to destroy the information it held;
  • the Australian Consumer Law under the Competition and Consumer Act 2010 (Cth) applies to AI-driven technologies implemented in a consumer-facing context. To date, these laws have been useful in policing misleading and deceptive conduct where automated decision-making (ADM) AI applications have been used;
  • Australia’s anti-discrimination laws can provide individuals with a remedy where they have been the victim of a discriminatory outcome resulting from an AI-driven process (provided that the discrimination relates to an attribute protected under the relevant laws); and
  • copyright and intellectual property laws in Australia can help to regulate intellectual property rights related to AI, especially in the context of generative AI tools such as ChatGPT, which can use copyright-protected materials as input data, but can also produce output that could infringe intellectual property rights.

Accordingly, the Paper seeks industry views as to the inherent risks posed by AI-driven technologies that are not addressed (or cannot be addressed) by existing regulation, and whether further regulation is required in order to ensure that Australian law is fit-for-purpose.

 

The need for specific contemplation of AI in regulation

The central issue with the current regulatory framework in Australia is that it does not address AI specifically and therefore does not provide for any regulation that caters to the uniqueness, breadth of capability and inherent risk profile of the technology.

For example, there are no enforceable regulatory guardrails on how AI is developed and deployed that would hold developers accountable for the impact of their technology and restrict its development to the extent that it has the potential to cause harm (for example, harm caused by the unauthorised use or disclosure of an individual’s personal information). Potentially harmful AI may therefore be developed, and the current regulatory framework can only address any resulting harm in retrospect, that is, after the harm has been suffered.

In addition, the rights and protections offered to individuals through existing regulation are not necessarily fit-for-purpose; they may not address some of the inherent risks of the technology and could hinder individuals’ ability to obtain a remedy where they have been harmed by AI.

Consider ADM, where an AI algorithm is trained on specific data sets to develop the capability to make decisions (for example, an ADM capability in banking software that decides whether a person is eligible for a home loan). Depending on the quality and breadth of the training data used, the algorithm may produce outcomes that are biased against, or discriminatory toward, individuals. However, absent a legislated individual right to request information about how an AI-driven decision has been made by an ADM algorithm (and a corresponding requirement that developers of AI products provide such transparency and explainability), there is no real ability to detect such bias or discrimination and provide redress to an affected individual.
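To make the transparency and explainability point concrete, here is a minimal sketch, in Python, of the kind of decision record an ADM system could keep so that a “how was this decision made?” request can actually be answered. The eligibility rules, thresholds and field names are entirely hypothetical illustrations, not drawn from the Paper or from any existing law:

```python
# Minimal sketch of an explainable ADM decision record.
# All rules, thresholds and field names are hypothetical illustrations.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    applicant_id: str
    inputs: dict                                  # the data points actually relied on
    outcome: str                                  # "approved" or "declined"
    reasons: list = field(default_factory=list)   # human-readable reasons
    decided_at: str = ""

def assess_home_loan(applicant_id: str, income: float, existing_debt: float) -> DecisionRecord:
    """Decide eligibility and record why, so the decision can be explained later."""
    reasons = []
    eligible = True
    if existing_debt > income * 0.5:    # hypothetical serviceability rule
        eligible = False
        reasons.append("existing debt exceeds 50% of annual income")
    if income < 40_000:                 # hypothetical income floor
        eligible = False
        reasons.append("annual income below minimum threshold")
    if eligible:
        reasons.append("all eligibility rules satisfied")
    return DecisionRecord(
        applicant_id=applicant_id,
        inputs={"income": income, "existing_debt": existing_debt},
        outcome="approved" if eligible else "declined",
        reasons=reasons,
        decided_at=datetime.now(timezone.utc).isoformat(),
    )

# An individual's request for an explanation can be answered from the record,
# rather than from an opaque model output.
record = assess_home_loan("A-1023", income=38_000, existing_debt=25_000)
print(record.outcome, "-", "; ".join(record.reasons))
```

A record of this kind would also support the sort of individual information right contemplated in the Attorney-General’s Privacy Act recommendations discussed below.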

 

The regulatory road ahead

The Paper proposes a diverse set of possible approaches to the regulation of AI, ranging from voluntary schemes to strict and enforceable regulation, including the following:

  • AI-specific regulations – whereby Government would introduce enforceable AI-specific laws, especially in high-risk settings.
  • Industry self-regulation – whereby a specific industry would adopt a code of conduct through a voluntary scheme; the obvious weakness of which is its voluntary nature and inability to compel AI developers to comply.
  • Regulatory collaboration – whereby sector-specific regulators, such as the Australian Competition and Consumer Commission, the Australian Communications and Media Authority, the Office of the Australian Information Commissioner and the eSafety Commissioner, would collaborate with Government to review the efficacy of existing regulations and engage technical experts to determine the extent of the need for further regulation.
  • Governance – whereby specific bodies and platforms would be established to support AI governance outcomes, policies and initiatives.
  • Technical standards – whereby Government would make currently voluntary standards for the development of emerging technologies mandatory.
  • Assurance infrastructure – whereby testing regimes for the development of AI-driven technologies would be mandated.
  • Consumer transparency – whereby AI-driven technologies would be required to undergo impact assessments, providing transparency to the public about the potential impact of the technology and how it works.
  • Bans, prohibitions, and moratoriums – whereby Government prohibits certain activities by law (for example, AI applications that are deemed to be high risk).
  • By-design considerations – whereby Government would mandate that AI development must consider the privacy and safety of individuals.

Importantly, the Paper also acknowledges that Australia will need to harmonise its adopted regulatory framework with those adopted globally or by its major trading partners, including in respect of catering to any extraterritorial application of foreign AI laws.

 

A sector-based approach?

The broader question to be answered in respect of how AI is to be regulated in Australia is whether the approach should be centralised (as per the EU approach of specifically legislating for AI) or decentralised (as per the proposed approach in the UK, which would leave AI regulation to sector-specific regulators, each regulating AI based on the specific risks it poses to the relevant sector).

The Decentralised Approach

The key advantage of a decentralised, sector-based approach is that it allows each sector to regulate AI based on the unique risks the technology presents in that sector, given the manner in which it is used there. The way AI is used in the banking and insurance sector, for example, is generally different from the way it is used in the consumer goods and retail sectors, and may therefore deserve separate and more targeted regulatory treatment.

The general limitation of this approach, however, is the need to harmonise the regulatory treatment of general-purpose AI where that AI cuts across a number of sectors or regulated areas. In addition, one of the procedural burdens of this approach is the need to conduct a gap analysis to identify any shortcomings in the regulatory treatment between sectors and regulated areas.

An instance of a decentralised approach to AI regulation can already be seen in Australia in the review of the Australian privacy regime undertaken by the Attorney-General’s Department. In its raft of recommendations, the review recommends that the Privacy Act be amended to specifically contemplate privacy protections in the context of AI-driven ADM algorithms, including:

  • individual rights to request information about how automated decisions are made; and
  • obligations on regulated entities to disclose in their privacy policies whether personal information is used in any ADM process that will have a legal or significant impact on the rights of an individual.

However, the inherent limitation of this approach is that it will apply only to privacy laws, not generally, and would not necessarily help in addressing issues such as discrimination or bias.

The Centralised/Risk-Based Approach

On the other hand, the centralised approach would involve the introduction of an AI-specific set of regulations. This is the approach adopted by the EU, which has recently progressed a landmark piece of legislation, the AI Act, that seeks to regulate the development of AI-driven applications based on an assessment of the inherent risk profile of the relevant technology. The AI Act recognises a need to regulate certain AI applications based on their potential to cause harm. For example, it proposes a complete legislative prohibition on real-time biometric identification technology in public spaces, and limits the permissible ex post facto use of biometric identification systems to the prosecution of serious crimes (and only after judicial authorisation), due to the potential for that type of AI to be used for nefarious purposes.

While this risk-based approach would enable governments and regulators to be proactive and preventative in their approach to the regulation of AI, it does create some complexity. First, regulation introduced under this approach would be general purpose, categorising AI by risk level rather than by specific use case. The regulation may therefore be quite rigid (in an effort to be one-size-fits-all) and fail to speak to the risk profiles or use cases of particular sectors, which could stifle innovation if the legislation has no flex. Further, a general approach raises the risk of regulatory inconsistency between the AI-specific laws and other forms of regulation that touch AI technologies, which could create uncertainty in the market as to best practice and the extent of compliance requirements.

Whatever the chosen approach, it is clear from the Paper that any framework must be implemented in a way that facilitates, rather than stifles, innovation; a clear and targeted sector-based approach would appear best placed to achieve this end.

 

What needs to be done?

In addition to introducing regulatory change based on the adopted regulatory model(s), it will be incumbent on regulators and Government to communicate with and guide industry, on both the developer and user side, so that the business community clearly understands which laws relate to AI (regardless of the regulatory approach taken), how and in what contexts they apply, and how to comply with them.

For example, industry will need to understand the relevant requirements of:

  • the laws pertaining to intellectual property infringement and the boundaries of those laws. Understanding these will be essential for any organisation using copyright-protected material as input into a generative AI application;
  • any regulations adopted relating to transparency and explainability of AI, including in relation to how the relevant AI application works and the data that is used to train it, as well as transparency around decision making by ADM systems;
  • any prescribed processes that must be undertaken to identify or address biased, discriminatory or otherwise unintended outcomes, including, for example, mandatory testing requirements and robust quality assurance procedures (a minimal illustration of one such test follows this list);
  • relevant privacy and data security laws. Privacy laws will apply where personal information is used in a training data set for an AI product, and it is possible that any adopted AI regulations will include requirements on AI developers to implement technical and organisational measures to ensure data security; and
  • relevant consumer laws, including in respect of a developer’s product liability obligations, such as those arising under the consumer guarantees in the Australian Consumer Law.
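By way of illustration only, the sketch below shows one simple shape such a bias test might take: comparing approval rates between groups of a protected attribute and flagging large disparities. The “four-fifths” threshold, field names and sample data are hypothetical assumptions, not a prescribed Australian legal test:

```python
# Minimal sketch of a disparate-impact check over ADM outcomes.
# Threshold, field names and data are hypothetical illustrations.
from collections import defaultdict

def approval_rates(decisions: list[dict], attribute: str) -> dict[str, float]:
    """Approval rate per group of a (protected) attribute."""
    totals, approved = defaultdict(int), defaultdict(int)
    for d in decisions:
        group = d[attribute]
        totals[group] += 1
        approved[group] += d["outcome"] == "approved"
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_flags(rates: dict[str, float], threshold: float = 0.8) -> list[str]:
    """Flag groups whose approval rate is below threshold x the best group's rate."""
    best = max(rates.values())
    return [g for g, r in rates.items() if best > 0 and r / best < threshold]

decisions = [
    {"outcome": "approved", "age_band": "25-40"},
    {"outcome": "approved", "age_band": "25-40"},
    {"outcome": "declined", "age_band": "60+"},
    {"outcome": "approved", "age_band": "60+"},
    {"outcome": "declined", "age_band": "60+"},
]
rates = approval_rates(decisions, "age_band")
print(rates, "flagged:", disparate_impact_flags(rates))
```

In practice, any such test would need to reflect the attributes actually protected under Australian anti-discrimination law and the specific testing obligations ultimately prescribed.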

 

What can you do?

It is clear that we are well and truly on the road to AI regulation in Australia, which is an important strategic move by Government to ensure that AI is developed safely and securely, thereby helping to mend the cracks in public trust and confidence in AI. But how best does an organisation prepare for this new area of regulation?

While it will be important for Government to do its part in educating industry on any regulatory framework it adopts (as noted in the section above), organisations can get ahead of the curve by, broadly speaking, ensuring that their internal processes, procedures and broader compliance environments allow for the safe, secure and lawful development and use (as the case may be) of AI technologies. Taking such measures now may necessitate some duplication of effort once actual regulation is introduced, but the reality is that AI is here now; accordingly, even absent regulation, organisations should be implementing organisational measures to ensure “AI hygiene” and minimise the risk that derives from its use.

To that end, organisations may, depending on their nature and the sector(s) in which they operate, consider the following:

  • Internal audit – audit software use throughout the organisation and identify any AI-driven products used in the business, both internally and by customers, whether hosted internally or provided on an outsourced or as-a-Service basis. This will assist in framing the review of the organisation’s policies and procedures described below (a minimal sketch of such an inventory follows this list).
  • Understand data flows – develop a sophisticated understanding of how data flows through the organisation, and whether any of that data touches the AI products used by the business, and in what way. This will assist in understanding the extent to which any guardrails or information barriers may need to be implemented at an organisational level, so as to mitigate, for example, privacy risks where personal information held by the organisation interacts with AI.
  • Understand the existing landscape – seek to understand the existing regulatory landscape and the organisation’s rights and obligations under it.
  • Policy and procedure review – review existing policies and procedures (such as information security policies and practices, privacy policies and relevant customer-facing policies) to ensure that they are fit-for-purpose. For AI developers, this may mean updating their customer-facing privacy policies, if and to the extent that personal information is used to train the underlying algorithm that powers their products. For customers, this may involve modifying internal processes to cater for the manner in which that organisation uses AI – for example, an organisation may wish to ensure that its information security policy contemplates that no client or commercial information is input into any sort of AI-driven software product used by its business.
  • Implement best practice – recently, the US National Institute of Standards and Technology (NIST) released its AI Risk Management Framework, and ISO has been developing international standards for the deployment and management of AI. Organisations could choose to adopt these standards to ensure that their internal AI-related processes reflect best practice.
  • Contract review – consider whether customer contracts (of AI developers, or of user organisations whose AI use affects or interacts with their end customers) appropriately deal with matters such as privacy, liability, indemnity and intellectual property rights, in a manner commensurate with the nature of the AI product in question. Further, organisations using AI will want to ensure that their supplier contracts with AI developers include appropriate warranties in respect of the AI products provided. For example, in the context of generative AI, a warranty that the customer’s use of the relevant tool will not infringe third-party intellectual property rights would be appropriate.
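As a purely illustrative sketch of the internal audit and data-flow steps above (all field names and classifications are hypothetical assumptions, not prescribed categories), an organisation might begin with a simple register of AI-driven products, flagging those that touch personal or client data for priority policy review:

```python
# Minimal sketch of an AI-product register supporting the audit and
# data-flow steps above. All fields are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class AIProductRecord:
    name: str
    vendor: str
    hosting: str                        # "internal", "outsourced" or "as-a-Service"
    uses_personal_information: bool     # personal information flows into the product
    receives_client_data: bool          # client or commercial data flows into the product

def needs_guardrails(record: AIProductRecord) -> bool:
    """Products touching personal or client data warrant policy review first."""
    return record.uses_personal_information or record.receives_client_data

inventory = [
    AIProductRecord("chat-assistant", "ExampleVendor", "as-a-Service", False, True),
    AIProductRecord("loan-scoring", "built in-house", "internal", True, False),
]
for rec in inventory:
    print(rec.name, "-> review required" if needs_guardrails(rec) else "-> low risk")
```

Even a register this simple gives the policy and contract reviews above a concrete starting point.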

 

How can DLA Piper help?

Organisations can submit feedback on the Paper until 26 July 2023. DLA Piper is well placed to assist any organisation that wishes to make a submission on the Paper, or that wishes to understand, more broadly, the potential opportunities and risks that the adoption of AI may present. Further, to the extent that any organisation would benefit from guidance on its existing legal and ethical obligations in relation to AI, or the preparatory measures it may take to de-risk its development and/or use of AI (including the measures suggested above), we would be happy to discuss these with you.

DLA Piper brings together a global depth of capability on AI with an innovative product offering in a way no other firm can. Our global, cross-functional team of lawyers, data scientists, programmers, coders and policymakers deliver technical solutions on AI adoption, procurement, deployment, risk mitigation and monitoring. Further, because AI is not always industry-agnostic, our team includes sector-focused lawyers with extensive experience in highly regulated industries. We’re helping industry leaders and global brand names across the technology, life sciences, healthcare, insurance and transportation sectors stay ahead of the curve on AI.
