26 February 2024 · 8 minute read

Generative AI tools

Generative AI (GenAI) is a fascinating branch of AI that can produce original and creative content.

It uses complex algorithms and neural networks to learn from data and generate outputs that mimic human creativity. The data used to train GenAI tools can be text, images, audio, video or other types of content.

GenAI is a powerful technology that can revolutionize various industries and domains, such as content creation, design, art and software development. It can also improve human productivity and collaboration, and it has the potential to deliver societal benefits, drive economic growth, and enhance innovation and global competitiveness.

But GenAI poses significant challenges and risks; it’s commonly acknowledged that GenAI tools blur the boundaries between originality and derivation, authorship and ownership, fair use and infringement. Further, they raise concerns about accuracy, reliability, security and privacy.

Both GenAI providers and GenAI users are facing lawsuits stemming from unfair training or improper use of GenAI.


Recent claims of copyright infringement by GenAI tools in the US

Over the last year, many content creators and owners (writers and visual artists, but also source-code owners) have filed lawsuits against GenAI software companies in the US, mainly claiming copyright infringement.

Several US novelists (including famous author John Grisham) sued OpenAI (which developed ChatGPT) back in September 2023, alleging “systematic theft on a mass scale,” as ChatGPT was allegedly trained on their works without permission.

Google was also sued over alleged infringements involving computer chips. And some music publishers claimed in court that they had been irreparably harmed by “Claude,” a chatbot produced by the software company Anthropic, which they accused of illegitimately training its AI on music lyrics.

Recently, The New York Times also filed a lawsuit against both Microsoft and OpenAI, contending that their GenAI tools “rely on large-language models … that were built by copying and using millions of The Times’s copyrighted news articles, in-depth investigations, opinion pieces, reviews, how-to guides, and more” and seeking “to hold them responsible for the billions of dollars in statutory and actual damages that they owe for the unlawful copying and use of The Times’s uniquely valuable works.”

Some of these US copyright infringement claims have been settled, with the content creators agreeing to license their intellectual property rights to the GenAI software companies for a fee.

Copyright license agreements with the owners of the content used to train GenAI tools will probably be the main way for software companies to mitigate the risk of copyright infringement.

But this might not be enough to prevent GenAI software companies from being targeted with copyright infringement claims. A verification system proving that GenAI tools have been fairly trained could also be implemented to help ensure responsible use of GenAI products. And, as Bloomberg reported, specific initiatives to “evaluate and certify artificial intelligence products as copyright-compliant, offering a stamp of approval to AI companies that submit details of their models for independent review” are being discussed in the US.

Even verification systems, though, may have limitations of their own and pose challenges, considering that they would in turn be based on AI tools.


GenAI tools also raise significant issues for (direct and indirect) users

GenAI tools raise concerns not only for their providers but also for those who spread or rely on their output.

In April 2023 the song “Heart on My Sleeve,” apparently by music artists Drake and The Weeknd, went viral on Spotify and YouTube; the catch was that the song was neither written nor sung by them, but rather generated by an unknown TikToker using a GenAI tool.

The song was removed from the streaming platforms as soon as the hoax was discovered; if you look it up on YouTube, you’ll find that it “is no longer available due to a copyright claim by Universal Music Group (UMG),” Drake’s record label. According to UMG, “platforms have a fundamental legal and ethical responsibility to prevent the use of their services in ways that harm artists,” which might suggest that UMG is considering taking action against those who allowed the bogus song to spread, such as YouTube and Spotify.

Around the same time, two lawyers in the US used ChatGPT to prepare a case. Unfortunately for them, the court precedents that the AI tool supplied turned out to be completely made up, and they ended up presenting six fictitious case citations to the judge. In AI parlance this is called “hallucination,” a term that refers to false or misleading information generated by AI tools.

As a result of this hallucination, the judge imposed a USD5,000 sanction on the lawyers for having acted in bad faith and committed “acts of conscious avoidance and false and misleading statements to the court.” The client might also sue the lawyers for negligence, as they relied on the inaccurate output of the GenAI tool, possibly without adequately informing him of its use in preparing the case.


GenAI tools, data protection and privacy

Last but not least, GenAI tools raise data protection and privacy issues in several ways.

The use of GenAI tools may result in unauthorized access to, or disclosure of, sensitive information; GenAI tools may also share personal data with third parties without explicit consent or for purposes beyond those initially communicated.

GenAI tools may also inadvertently perpetuate biases present in the training data, which could affect the privacy and dignity of individuals or groups. For example, a GenAI tool that generates facial images may produce images that are skewed toward certain races, genders or ages, possibly resulting in discriminatory outcomes.


GenAI users should be responsible and cautious

Using GenAI tools may produce inaccurate and misleading content (which might have serious consequences for users) or expose sensitive or confidential data to unauthorized access or misuse (compromising the security and privacy of both users and third parties).

Those who use or plan to use GenAI tools should be aware of these issues and adopt appropriate policies and practices to make sure that:

  • the accuracy and reliability of the content generated by GenAI tools are verified, and sources and references for the created content are provided;
  • data and systems used by GenAI tools are properly secured and protected;
  • ethical and responsible principles are followed when using GenAI, so that the content generated by GenAI tools is fair.

But GenAI users seem to be far from achieving a “safe” use of GenAI tools. According to a recent survey DLA Piper conducted among its clients, 96% of the companies interviewed are rolling out AI tools in some way. However, 71% of interviewees described themselves as mere “explorers” in the field, which might suggest they’re not fully aware of the risks related to GenAI tools and how to prevent them.


What about insurers? Do insurance policies available on the market cover liability connected to GenAI tools and their use?

The insurance world cannot ignore the risks and challenges that GenAI presents, which are among the most significant it will face in the near future.

Indeed, insurers of both GenAI providers and GenAI users will now be called on to cover claims of the kind described so far.

As for GenAI providers, the professional indemnity (PI) policies available to them respond to claims based on malfunctioning of the chatbot, eg due to flaws in the algorithms or bias in the machine-learning models.

Coverage for copyright infringement claims like The New York Times’ could be questioned by the insurers of GenAI tool providers, based on exclusions applying to this specific kind of claim. Or those insurers could argue that training GenAI tools without copyright permission can only be intentional and is therefore not covered.

As for GenAI users, depending on the type and extent of the loss, different insurance policies may be triggered by the liabilities deriving from the use of GenAI tools. For example, cyber insurance may cover the (mis)use of protected data, general liability insurance may cover copyright infringement claims, and professional liability insurance may cover errors or omissions deriving from unreliable or inaccurate GenAI outputs.

Property policies might also be triggered, eg where company machinery is damaged by incorrect instructions given to it by GenAI tools, or in relation to business interruption coverage.

However, there may also be gaps or uncertainties in coverage, as some insurance policies were not designed for the specific risks relating to GenAI and may have wording that does not “fit” the new risks.

The policies existing on the market should probably be reviewed to better define the scope and limits of the coverage they provide and to avoid ambiguity.

Interestingly, some insurers are already offering a “new” insurance product that is specifically triggered when AI solutions do not perform as promised, indemnifying the client of the GenAI tools’ provider. Depending on the extent of coverage (which may include third-party claims against the client arising from improper performance of the AI solution), this “new” AI policy could overlap with the client’s own PI policy.

The insurance market will face major challenges, both in creating insurance products that meet these emerging needs and in coordinating new products with existing ones.

Given the various and significant risks that may arise from the use of GenAI tools, PI insurers should consider specifically investigating the insured’s use of such tools in order to accurately assess the risk they’re taking on.
