May 8, 2023 | 19 minute read

Using policy to protect your organization from generative AI risks

As generative artificial intelligence (AI) gains mainstream popularity, it is crucial for organizations to recognize its impact on the way work is produced. Virtually every organization, regardless of size, has at least one individual or department interested in using generative AI for business purposes.

The breathtaking, bewildering, and frightening content generated by AI has prompted businesses and employees alike to consider how AI could be integrated into their work to reduce cost, increase efficiency, and cover skill and information gaps. While the potential benefits of leveraging AI in workflows are profound, the unconsidered use of this technology also introduces significant organizational risk. The decision of whether, when, and how to adopt AI should not be left to individual workers to figure out on their own; rather, it is essential for companies to adopt a principled, policy-driven approach to govern AI, and to do so sooner rather than later.

The rise of generative AI

Generative AI has captured the public's attention due to recent advances in artificial intelligence technologies. However, it is worth acknowledging that machine learning has long been embedded in a wide range of workplace applications, such as data analysis, customer service, recruitment and hiring, fraud detection, translation and language services, project management, and even editing and drafting tools.

While all AI systems can be considered "generative" to some extent, this article will primarily focus on a category known as "Generative AI." This rapidly evolving field involves AI systems capable of creating seemingly new, useful, or realistic content. Generative AI algorithms produce text, images, video, audio, code, or synthetic data based on training data and user inputs, often through human prompts or interactions. Well-known examples include ChatGPT, DALL-E, Bard, Bing Chat, and Midjourney, all of which entered the public consciousness in late 2022 and early 2023.

At a technical level, these systems are simply tools that employ generative models, such as large language models refined with reinforcement learning and other techniques, to generate new data based on their training datasets; the process is different, but they remain tools nonetheless. From an organizational perspective, however, generative AI systems represent a potentially disruptive workplace technology. As such, a careful evaluation of both their risks and benefits is essential.

AI offers many undeniable benefits as a tool, particularly with regard to efficiency and creativity (though there are still debates about how creative generative AI really is). Generative AI can streamline repetitive tasks, enabling workers to concentrate on more complex and creative aspects of their jobs. This leads to improved productivity and more effective use of human resources. The implications extend across numerous industries. Goldman Sachs’ recent report on generative AI’s impact on the global economy estimates that as many as 300 million full-time jobs around the world could be automated in some way by AI, while at the same time raising global GDP by 7 percent, all over a 10-year period.

By automating specific tasks, generative AI can reduce content generation costs, allowing teams to adhere to budgets and more efficiently allocate resources towards growth opportunities; as a bonus, generative AI can provide valuable insights and recommendations based on large datasets, helping make more informed decisions and optimize strategies.

Generative AI can provide inspiration and novel ideas that workers can build upon, helping them to overcome creative blocks and find innovative solutions, whether that is indirectly by freeing up worker time for more creative processes, or directly by allowing workers to quickly explore ideas (using commercial tools such as Midjourney or ChatGPT). In one example of this, video game publisher Ubisoft announced a tool that helps game writers quickly generate first drafts of "barks"—short phrases or sounds made by environments or virtual characters in games. Since large games often contain thousands of barks, drafting them from scratch is time-consuming and monotonous for writers who would much rather focus on the main characters, or add value to an existing draft.

Navigating the AI intellectual property minefield

The legal status of AI-generated works remains untested and uncertain, raising significant questions about the applicability of existing intellectual property (IP) laws. This ambiguity presents challenges for organizations seeking to protect their interests in AI-generated content, making it crucial to proactively manage IP risks, including:

  • Ownership of AI-generated works: It is unclear who owns the content that generative AI platforms create, whether it is the AI platform provider, the user who provides the prompts, or the original creators of the data used to train the model. In fact, it is unclear whether such AI-generated works are even intellectual property that can be owned and protected at all. For example, in the US, the Copyright Office issued guidance in March 2023 to clarify when artistic works created with the help of AI are copyright-eligible. The office said that copyright protection depends on whether AI's contributions are the result of mechanical reproduction, such as in response to text prompts, or whether they reflect the author's "own original mental conception". The office also said that most popular AI systems, such as Midjourney, ChatGPT and DALL-E 2, do not create copyrightable work, because the limited prompts do not let users exercise ultimate creative control over how such systems interpret prompts and generate material; instead, they function more like "instructions to a commissioned artist". However, creative modifications and arrangements of AI-created work can still be copyrighted, and the office said its policy "does not mean that technological tools cannot be part of the creative process". The office also said that copyright applicants must disclose when their work includes AI-created material, and that previously filed applications that do not disclose AI's role must be corrected. (For more on the Copyright Office guidance, see our colleagues’ article here.)
  • Infringing, illegal, or unlicensed content in training data: Generative AI platforms may use unlicensed or illegal content in their training data, such as personal data, hate speech, or pirated material, which may raise ethical and legal concerns. Some jurisdictions have tried to develop a collaborative approach in addressing some of these concerns. For example, the UK Intellectual Property Office announced in March 2023 that it will create a code of practice for generative AI companies to facilitate their access to copyrighted material, and follow up with specific legislation if a satisfactory agreement cannot be reached between AI firms and those in creative sectors. The office said that the code of practice will provide guidance to help AI firms access copyrighted work as an input to their models, while ensuring there are protections (e.g. labelling) on generated output to support rights holders of copyrighted work; apparently, any AI firm that commits to the code can expect to have a reasonable licence offered by a rights holder in return. The office has also said that it will convene a group of AI firms and rights holders to identify barriers faced by users of data mining techniques. (For more on the UK IPO white paper, see our colleagues’ blog post here.)
  • Patentability of AI-generated inventions: Dr Stephen Thaler, a computer scientist and inventor of an AI system called DABUS (Device for the Autonomous Bootstrapping of Unified Sentience), filed patent applications in several countries for two inventions that he claimed were devised by DABUS without human intervention, naming DABUS as the inventor and himself as the assignee of the rights. The patent offices of the UK and US, as well as the European Patent Office, rejected the applications (and courts have upheld these rejections) on the grounds that an inventor must be a natural person, not a machine, according to their respective laws and treaties. They also argued that naming a machine as an inventor would undermine the social contract between inventors and society, and create legal uncertainty over the ownership and enforcement of patents. Dr Thaler has petitioned the US Supreme Court to review his case. (DLA Piper has several articles about this case.)

Protecting confidential information and managing regulatory compliance risk

In addition to intellectual property concerns, using generative AI comes with a host of other risks:

  • Privacy: AI-generated content might unintentionally incorporate personal information or violate privacy regulations, exposing organizations to legal and reputational risks. Ensuring AI systems adhere to data protection and privacy standards is crucial.
  • Cybersecurity: The use of AI tools may introduce new attack vectors for hackers, requiring organizations to implement robust cybersecurity measures and ensure that their AI systems (or any external AI tools used) are secure.
  • Likeness rights: AI-generated content may inadvertently infringe on an individual's right of publicity or create unauthorized representations of real people, places or things, potentially leading to legal disputes.
  • Misinformation: Generative AI can produce convincing but inaccurate content, raising concerns about the spread of misinformation, liability and potential harm to an organization's credibility; inaccurate AI-generated content could lead to false advertising claims or breach consumer protection laws, putting organizations at risk of regulatory penalties or litigation.
  • Other ethical dilemmas: Generative AI can inadvertently perpetuate biases or create content that negatively impacts workplace morale, leading to ethical concerns and potential harm to an organization's culture and reputation.

Platforms like ChatGPT collect user inputs to further train their models and do not necessarily protect the confidentiality of those inputs. In some instances, employees may input confidential business information, including protected IP, in order to generate answers relevant to their work. However, by doing so, employees compromise the confidentiality of that information. This may create litigation risk when the organization later seeks to protect its confidential information from competitors, as well as regulatory enforcement risk for unintended violations of record-keeping requirements applicable to public companies.

This risk is not speculative. Several large US companies have banned employee use of ChatGPT after disclosures that such use was compromising confidential information or risked violating regulatory record-keeping and related requirements.

In addition, governments are jumping in on the action. In June 2022, the federal government of Canada tabled the new Artificial Intelligence and Data Act (“AIDA”) as part of Bill C-27, the Digital Charter Implementation Act, 2022. AIDA regulates “high-impact AI systems”, which are yet to be defined. Without a full definition, it is uncertain exactly which, if any, generative AI systems will be captured by the new legislation, but it is likely that some will be. For more on AIDA, you can read our articles here and here.

Under AIDA, any person who designs, develops, makes available for use, or manages the operations of a high-impact AI system will be responsible for that system, and AIDA will impose significant obligations on any organization using a high-impact AI system, including assessing whether a system is high-impact and, if so, establishing measures to identify, assess and mitigate risks of harm and bias, including ongoing monitoring requirements, data anonymization, and record-keeping. AIDA will be enforced through three mechanisms: (1) administrative monetary penalties; (2) prosecution of regulatory offences; and (3) criminal charges. Monetary penalties could be as high as 3 percent of global revenue for contraventions of AIDA, and 5 percent of global revenue for commission of offences under the Act. While it is not yet clear which governmental entity will be responsible for regulatory enforcement, criminal offence enforcement will be administered by the Public Prosecution Service of Canada.

The regulations under AIDA are in a consultation phase and will not be in force until 2025 at the earliest, with enforcement likely commencing in 2026 or 2027. However, the scope of AIDA is significant, its burdens onerous, and its penalties for contravention severe. Given the risks identified in this article and the rapid pace of AI deployment, organizations should not wait until the legislation comes into force to prepare AI policies, but they should pay close attention to legislative developments as those policies are developed.

For instance, if an AI system incorporates some form of bias or discrimination and is then used by an organization to make a decision that impacts an individual (such as insurability, granting an employee benefit, banning or taking action against a user, or the like), there is both a risk of private litigation and a regulatory risk under AIDA, even if the organization did not design, develop or manage that system. If the system is not properly vetted under a corporate policy, that risk remains unknown, and the system's inherent risks are effectively absorbed into the organization's own risk profile.

Contractual considerations for generative AI

In many cases, contracts with third parties and workforces may not directly address the use of generative AI unless it was specifically considered at the time of drafting, and trying to figure out how generative AI should be treated is a little like fitting a square peg into a round hole. For instance, contracts often include clauses prohibiting the use of outside intellectual property (that is, third-party IP or previously generated IP) in new work; however, these clauses might not prevent a contractor from drawing inspiration from or using snippets of an AI-generated work, and it is uncertain whether workers would interpret such clauses as applying to their potential uses of generative AI. Similarly, company policies on work product, non-plagiarism, or the use of outside works may not explicitly address generative or contributory technologies like generative AI.

To fully exploit deliverables and work product, companies should specifically address the use of generative AI in their agreements. For instance, a restrictive approach to using generative AI could be articulated in the following clause:

Prohibition of Generative AI use. The Contractor will not, without the prior consent of the Company in writing, utilize any generative artificial intelligence software, tools, or technologies, including natural language processing, deep learning algorithms, or machine learning models (“Generative AI”) directly or indirectly in the performance of the Services or the creation of any Work. The Contractor represents and warrants that all Work will be the result of the Contractor's independent, original efforts without any unapproved Generative AI assistance, and will not incorporate or be based upon any output or contribution generated, in whole or in part, by Generative AI except strictly in accordance with Company policy.

Alternatively, with a well-developed policy, a simple clause stating “The Contractor will not utilize any generative artificial intelligence systems except in accordance with Company policy” may be sufficient.

However, there is no one-size-fits-all solution; any clause must be tailored to the specific needs of each company. Striking a balance between prohibiting the use of generative AI for content creation and allowing AI-assisted tools that enhance productivity and efficiency is essential. Does the organization deal with copywriters or artists? Many commercial photo, video, and text editing tools have AI-assistance features (e.g., the magic eraser in Photoshop or predictive text in word processors). Contracts and policies should clearly differentiate between prohibited generative AI usage and permitted AI-assisted tool usage, allowing workers to benefit from the latter without violating their agreements. This intersection of policy, contract, and practicality requires careful evaluation to ensure a coherent approach.

Policy approaches to generative AI

If an organization cannot incorporate all necessary AI restrictions into its contracts, well-defined policies should bridge the gap. Adopting a top-down, comprehensive approach will be crucial in contextualizing generative AI use within the organization.

When developing policies for AI, a company should first assess whether employing generative AI is necessary and advantageous for specific tasks or processes, considering potential benefits and weighing them against potential risks and downsides. As part of this assessment, the organization should identify approved or "no-go" use cases for generative AI, gathering input from relevant stakeholders and assigning appropriate responsibilities and roles. It may determine that no AI usage is appropriate at all. If it decides to permit some use, then, in collaboration with internal and external legal advisors, it should develop a policy tailored to its specific business context, which could encompass the following topics:

  • Purpose and scope: Clearly define the purpose and scope of the policy, specifying its applicability to different departments, roles, and responsibilities within the organization.
  • AI ethics principles: Where AI will be used, establish a set of core principles to guide the ethical use of generative AI within the company (such as fairness, transparency, accountability, and respect for privacy, or other principles aligned with the corporate mission, values and goals). As part of this, an organization may find it helpful to specifically address measures to minimize bias and promote fairness in AI algorithms and outputs, with ongoing monitoring, evaluation, and improvement of these systems.
  • Data management, privacy and security: Outline data handling practices, including data collection, storage, sharing, and disposal, ensuring compliance with relevant data protection laws and regulations, hand in hand with privacy and confidentiality policies. Address the protection of personal and sensitive information, as well as the implementation of robust security measures to safeguard against data breaches, unauthorized access, and misuse of AI-generated content.
  • Transparency and explainability: Encourage transparency in AI development processes and strive for explainability in AI-generated content and decision-making, as this is an increasing focus of regulators.
  • Accountability and responsibility: Define the roles and responsibilities of different stakeholders, including AI developers, users, and decision-makers, ensuring a clear chain of accountability.
  • Intellectual property: Considering its existing IP stance, evaluate the IP risks of using generative AI (described above) against the benefits, and set out clear guidelines for how teams are required to interact with the AI in order to ensure that those risks are mitigated.
  • Human oversight: Relatedly, establish guidelines for human oversight of AI systems, promoting a balance between automation and human intervention to mitigate potential risks and unintended consequences (and, as a side benefit, to help workers feel that they are part of the solution rather than being replaced).
  • Compliance and legal considerations: Ensure compliance with relevant industry regulations, other company policies, and other legal obligations related to AI deployment.
  • Training and awareness: Provide training and resources to workers on responsible AI use, ensuring they are aware of company policies, ethical considerations, and potential risks.
  • Third-party relationships: Develop guidelines for managing relationships with third-party AI vendors and partners, including due diligence, risk assessment, and ongoing monitoring.
  • Audit and monitoring: Implement regular audits and monitoring of AI systems to assess their performance, adherence to ethical guidelines, and effectiveness in meeting business objectives. Ensure that workers are aware that they should report violations of the policy and know the appropriate chain of command for doing so.
  • Incident response and remediation: Establish procedures for addressing AI-related incidents, including reporting, investigation, and remediation measures to minimize harm and prevent recurrence. Address contravention of the policy.
  • Continuous improvement: Foster a culture of continuous improvement, encouraging feedback and learning from AI deployments to refine and enhance the policy over time.

Given the rapidly evolving generative AI landscape, organizations must develop policies and procedures that are flexible and adaptable. Involving multiple stakeholders in decision-making and regularly revisiting and updating these policies will allow organizations to stay ahead of the curve and effectively manage AI usage. We are still in the infancy of the generative AI revolution, and businesses must be prepared to adapt their contracts, policies and workplace relationships accordingly.


Note: the authors ran ideas through ChatGPT in writing this article.
