April 27, 2023 | 9 minute read

Before creating or acquiring a technology solution that is generated by AI, consider your contract terms

As businesses consider the risks and benefits of using generative AI tools, approaches to those risks and benefits will differ depending on who you are.  Perhaps you are a vendor of a technology solution that was developed, in whole or in part, with the help of a generative AI tool; perhaps you are the customer purchasing or acquiring rights to that technology solution.

In this vendor/customer context, each party’s needs and the issues each may face are different. Here, we show how thoughtful contract drafting and negotiation may help both vendors and their customers address these differing needs and issues. You may also be interested in reading our related article, “Generative AI for Essential Corporate Functions: Use Cases & Legal Considerations,” published by Bloomberg on April 27, 2023.

The vendor’s perspective

While the use of generative AI tools may help to meet deadlines, accelerate time to market and reduce costs and expenses, those benefits come at a potential cost – not the least of which are the risk of losing trade secret protection for information used as inputs, ambiguous rights over ownership of the output, and possible third-party infringement risk when output is generated from or incorporates third-party content.  In addressing these risks from a vendor’s perspective, it may be helpful to look to open source software as a possible analogy.

The first version of the General Public License (GPL) was released in 1989, and Linux was first released under the GPL in 1992. Software developers have had over three decades to get comfortable with open source software.  Like generative AI tools, open source software can help to accelerate development cycles and time to market – yet the provenance of the software is not always clear.  We remember certain mergers and acquisitions early in our careers when merger agreements would include warranties that the target had never used any open source software; now those warranties reflect the reality of today’s software development and are drafted to tease out the most potentially problematic uses of open source software.

Similarly, it is likely that vendors will find their development teams using generative AI tools, so it may be unrealistic to ban their use entirely.  A more practical approach may be to ensure that any use is carefully considered and that vendors take that use into account when making commitments about their software products:

Terms of the generative AI tool:  Before permitting use of a generative AI tool, a business may wish to carefully review the AI tool’s terms of use/service to ensure that there is a clear understanding of the rights granted by the business to the provider of the generative AI tool with respect to the business’ inputs and the rights the business obtains to the output.  Do the terms of use/service for the generative AI tool clearly state that the user owns the output?  What remedies does a user have if a third party alleges unauthorized use of content that resulted in the output?  In the Getty case, for instance, Getty Images is alleging that the creators of a popular AI tool violated its copyrights by scraping images from the Getty Images site without authorization.  Only after having thoroughly analyzed the terms of use/service for a generative AI tool can a business make an assessment about the level of risk it is willing to assume from use of the output.

Warranties:  When a vendor provides a technology solution or content generated entirely or in part with the use of a generative AI tool, that use may affect the warranties the vendor is willing to offer.  The key questions here are whether the vendor is comfortable making a warranty about non-infringement, about being the sole and exclusive owner of whatever it is providing, or about having the right to provide the product it is providing.  The solution may be multifaceted: providing one warranty for the portion that is developed by “human authorship” and another, different warranty (perhaps knowledge qualified) or no warranty at all for the portion that is output produced by a generative AI tool. 

A business may also wish to consider whether to include an express warranty to the customer about any bias, errors or defects arising from the use of generative AI. 

Finally, a vendor should give due consideration to its approach to remedies.  Is the vendor able to fix a bug or error in the portion of the offering that is output of a generative AI tool?  Can the use of generative AI provide a competitive advantage in responding to support requests?  Perhaps a pricing model could be developed around such variables, setting out one set of prices for the product developed with the use of generative AI and another for the product developed without it.

Indemnities:  Likewise, in many instances vendors will provide an indemnity against third-party claims of infringement if the vendor’s product or service is alleged to infringe a third party’s intellectual property.  Should a vendor assume such indemnity liability for output that is produced by a generative AI tool?  Is the cost saving to the vendor passed through to the customer, who, in turn, sees a discount on the fees it pays to the vendor?  And is that discounted price a basis on which the vendor should decline to provide an indemnity?  In other words, if customers want an indemnity for the entire solution, including the portion produced by generative AI, they should expect to pay a premium for the vendor to assume that risk.

The customer’s perspective

When acquiring technology or content, knowing the provenance, or origin, of that technology or content is essential.  This is true whether you are a customer of a technology solution, are acquiring other content from a third-party developer, or are acquiring a company through a corporate transaction such as a merger, stock purchase or asset purchase. 

Most large enterprises have procurement departments that typically require vendors to go through a sometimes rigorous vetting process that, in the case of a technology solution, may involve extensive technical and security due diligence.  Companies experienced in acquisitions have robust due diligence workstreams, including with respect to the technology and intellectual property of the target company being acquired.  However, given this major technological shift, the typical procurement and due diligence process should now be adjusted to deal with new issues raised by uses of generative AI.  How can customers or buyers protect themselves from the potential risks raised by the use of generative AI tools?  Here are some areas to ask about:

Internal policies:  Does the vendor or target company have any policies governing the use of generative AI tools?  The existence of internal policies may indicate the vendor’s or target company’s level of sophistication and the priority it places on using generative AI in a thoughtful manner that mitigates risk for itself as well as its customers.  Asking questions about the vendor’s or target company’s policies and practices regarding the use of generative AI can provide useful insights to a prospective customer or buyer.

Additional due diligence:  Asking questions about how a product or other material or content was developed could be helpful in identifying whether generative AI was used and whether this creates a corresponding risk for the customer or the buyer.

Training, bias and accountability:  Because bias may be introduced in the process of training an AI tool, obtaining information about the data used to validate and train AI models can help to identify whether the vendor or target company considered bias issues in the input.  Separately, consider asking specifically about whether any steps have been taken to address bias in the output because, even if the risk of bias in the input is low, the training algorithm itself could be biased, thus potentially tainting the output. 

The source of the data used to train a model can also raise issues because not all usage of that data may be authorized.  Free data sets like Common Crawl are supposed to scrape data only from sites that do not include prohibitions on scraping, but their compliance is not assured, and in fact, they provide the data on an “as is” basis.  In other words, a vendor or company uses the Common Crawl data at its own risk, and therefore a customer or buyer acquiring anything from a vendor that uses Common Crawl data does so at its own peril. 

Warranties and indemnities:  One way to shift the burden back to the vendor or the sellers of the target company in an acquisition is through contract terms.  To the extent that vendors or target companies recognize the benefits of using generative AI in their go-to-market strategy, they should also assume the corresponding risk by making appropriate representations and warranties to customers and buyers.  Whatever is identified in the diligence and procurement vetting process should then be confirmed through appropriate representations and warranties in the underlying agreement with the vendor or target company. 

Consider including representations and warranties that focus on AI-related issues in addition to the usual warranties regarding intellectual property ownership and infringement and the use of third-party content.  Similarly, in a typical acquisition agreement, whether for a merger, stock purchase or asset purchase, consider whether the customary intellectual property warranties should be tailored for the business being acquired where use of generative AI has been identified through due diligence on the company and its product offering. 

Also, consider negotiating for a specific indemnity that addresses third-party claims related to unauthorized use of data sets to train AI algorithms in addition to the usual indemnity for intellectual property infringement.

It is in the interest of vendors, customers and acquirors to ensure that their contractual arrangements are structured such that the parties are able to realize the benefit of generative AI tools and that risks are reasonably and properly allocated between the parties.

In our next article in this series, we will review possible frameworks for managing the use of generative AI tools and offer suggestions for a business-centric policy that addresses certain legal issues that can arise from the use of generative AI in business functions.  

 
