September 19, 2023 | 7 minute read

Copyright Office requests public comments for new AI policy study: an opportunity to shape AI policy

Congress is focusing on whether and how to regulate AI, and so is the Copyright Office. The Copyright Office recently issued a comprehensive notice of request for public comment (Request) to inform its new “study of the copyright law and policy issues raised by artificial intelligence (AI).”

The Request, released on August 30, 2023, includes dozens of questions, and the issues in it broadly include (1) transparency, licensing, and fair use considerations for use of copyrighted works to train AI models; (2) copyrightability of AI-generated outputs; (3) whether AI models and output may infringe copyright and/or protections for copyright management information; and (4) whether a federal right of publicity law or other protections should be put in place to prohibit unauthorized creation of AI-generated outputs that imitate the style of a human creator.

Your opportunity to shape the AI policy discussion

The Request will inform the Copyright Office’s AI Study and builds on the Office’s March 2023 copyrightability guidance and public listening sessions concerning the copyright implications of AI for literary works (including software), visual arts, audiovisual works, and music. Given the rapid development of risks and opportunities associated with AI in all sectors, the new AI Study appears to be on a fast track. Members of Congress asked the US Copyright Office and US Patent and Trademark Office to provide a report to Congress with policy recommendations by December 31, 2024. In keeping with this timeline, written comments are due no later than October 30, 2023, and reply comments are due no later than November 29, 2023.

The Request is relevant to all companies that, now or in the future, are considering whether and how to engage with AI to use, create, or protect content. Submitting comments on the questions relevant to your business may provide an invaluable opportunity to inform Washington, DC about how various AI technologies work and where AI could take the content industries in the future. In short, if your company wants a say in AI policy in the content industry, this comment opportunity should not be missed.

Summary of key terms and topic areas in the Copyright Office request

Below is a brief overview of the key areas for public comment in the Request.

Key terms

Of particular relevance to companies creating AI models and customer-facing applications, the Copyright Office has proposed certain definitions for the terms below within the Request. These terms are not definitive, and companies with technical expertise in these areas should consider submitting comments to help the Copyright Office confirm or refine these initial definitions.

  • Artificial Intelligence (AI)
  • AI Model
  • AI System
  • Generative AI
  • Machine Learning
  • Training Datasets
  • Training Material
  • Weights

Key topics

In addition to dozens of more detailed questions and a general catchall for any issues not mentioned in the Request, the Copyright Office poses five general questions that ask:

  1. Is new legislation warranted to address copyright or related issues with generative AI – and if so, what should it entail?
  2. What are the potential benefits and risks of generative AI technology, including how its use is currently affecting or likely to affect creators, copyright owners, technology developers, researchers, and the public?
  3. Does the increasing use or distribution of AI-generated material raise any unique issues for your sector or industry as compared to other copyright stakeholders?
  4. Are there any papers or studies that are relevant to the AI Study, including papers that address the “economic effects of generative AI on the creative industries or how different licensing regimes do or could operate to remunerate copyright owners and/or creators for the use of their works in training AI models”?
  5. Is international consistency important, and are there any statutory or regulatory approaches outside the US that should be considered or avoided in the US?

Beyond the five general topics above, the Request includes four additional categories consisting of numerous nuanced questions, which are summarized below.

Training of AI

Unsurprisingly, given the current litigation surrounding the creation and use of various training sets, the Request poses numerous questions about training sets, including: (i) whether copyright owners should have to opt in to or opt out of the use of their works in training sets; (ii) under what circumstances the creation and use of a training set constitutes fair use; and (iii) what impact a licensing requirement would have on the development of AI systems.

Transparency and recordkeeping

Potentially less exciting than fair use questions, but no less important, are a few questions in the Request related to transparency and recordkeeping, including (i) whether developers of AI tools and training sets should “be required to collect, retain, and disclose records regarding the materials used to train their models” and (ii) whether there should be obligations to notify copyright owners that their works have been used in AI training.

Generative AI outputs

Copyrightability: Critical to many companies’ consideration of whether to use generative AI in commercial content, the Request includes questions about the copyrightability of AI-generated output, among them: (i) whether any revisions to the Copyright Act are necessary to address copyrightability; (ii) whether AI-generated output should be protectable (and, if so, whether protection should be more limited than full-term copyright protection); and (iii) whether there are circumstances in which a human using a generative AI system should be considered the "author" of the resulting material.

Infringement: Relevant to all companies, whether technology developer or content creator, are the Request’s questions on whether AI-generated output may infringe rights in existing works, including (i) whether outputs can infringe existing works; (ii) whether outputs from AI models trained on copyrighted content can violate the copyright management information protections under Section 1202 of the Copyright Act; (iii) who should be liable for infringing outputs (eg, AI developers and/or end users); and (iv) whether existing civil discovery rules are sufficient for obtaining materials relevant to assessing whether an AI model accessed an existing copyrighted work.

Labeling or identification: Last in the generative AI output topic, but certainly not least given concerns about accelerating fake news, the Request includes questions on whether AI-generated output should be labeled, including (i) whether new laws should require AI-generated material to be labeled and, if so, who should label the output and how the labeling would work; and (ii) what existing tools are available or in development to accurately identify AI-generated outputs.

Related to copyright (but not copyright)

Following on the heels of high-profile fakes and a May 2023 Congressional hearing during which there was substantial discussion of the name, image, and likeness rights that may be implicated by unauthorized generative AI outputs, the Request includes questions regarding (i) whether Congress should establish a new federal right of publicity that would apply to AI-generated outputs and (ii) whether there are or should be other protections against an AI system generating outputs that imitate the artistic style of a human creator (such as an AI system producing works "in the style of" a specific artist).


DLA Piper is here to help

DLA Piper’s award-winning teams in AI, media and entertainment, and all aspects of intellectual property partner with businesses to strategically manage the opportunities and risks associated with AI at all stages. If your company wants to know more about submitting comments to the AI Study or is seeking to get up to speed on all things AI, copyright, and media, please reach out to any of the authors of this article: Gina Durham, Rachel Fertig, and Danny Tobey. To find out more about our teams, please visit our AI, Copyright, and Media team pages.