
April 26, 2024 | 10-minute read

Congressional Federal Data Privacy Proposal sets broad mandates for AI

On April 7, 2024, Senate Commerce Committee Chair Maria Cantwell and House Energy and Commerce Committee Chair Cathy McMorris Rodgers unveiled the American Privacy Rights Act of 2024 (APRA), a comprehensive privacy bill that, if enacted, would set significant data privacy and security standards across the US. While much of the bill’s focus is on data privacy, the APRA includes several impactful provisions that would affect organizations and entities developing and using artificial intelligence.

This article details many of these proposals from the perspective of AI. Further information on the wider proposal can be found in DLA Piper’s full analysis.

Key standardized definitions

The bill deals with several complex aspects of technology and data, and it accordingly includes comprehensive definitions to minimize confusion in its application. Key terms used throughout its provisions include the following:

  • “Covered Algorithm” is defined in the legislation as “a computational process, including one derived from machine learning, statistics, or other data processing or artificial intelligence techniques, that makes a decision or facilitates human decision-making by using Covered Data, which includes determining the provision of products or services or ranking, ordering, promoting, recommending, amplifying, or similarly determining the delivery or display of information to an individual.”

  • “Covered Data” refers to “information that identifies or is linked or reasonably linkable, alone or in combination with other information, to an individual or a device that identifies or is linked or reasonably linkable to 1 or more individuals.”

  • “Consequential Decision” is defined as “a determination or an offer, including through advertisement, that uses Covered Data and relates to (1) an individual’s or a class of individuals’ access to or equal enjoyment of housing, employment, education enrollment or opportunity, healthcare, insurance, or credit opportunities; or (2) access to, or restrictions on the use of, any place of public accommodation.”

  • “Covered Entity” refers to any entity that determines the purpose and means of collecting, processing, retaining, or transferring Covered Data and is subject to the FTC Act or is a common carrier (as defined by the Communications Act of 1934), as well as certain nonprofits. The APRA excludes the following entities from the definition of Covered Entity: Small Businesses, government agencies, entities working on behalf of governments, the National Center for Missing and Exploited Children (NCMEC), and fraud-fighting nonprofits (excepting data security obligations).

  • “Large Data Holders” are Covered Entities that have $250 million or more in annual gross revenue and that collect, process, retain, or transfer (subject to exclusions) (i) the Covered Data of more than 5 million individuals, 15 million portable connected devices that identify or are linked or reasonably linkable to one or more individuals, or 35 million similarly linkable connected devices, or (ii) the Sensitive Covered Data of more than 200,000 individuals, 300,000 such portable connected devices, or 700,000 such connected devices.

  • "Sensitive Covered Data” is a subset of Covered Data that includes government identifiers; health, biometric, and genetic information; financial account and payment data; log-in credentials; private communications; and a wide range of other personal data or information about people’s race, ethnicity, national origin, religion, or sex; and other data the FTC defines as Sensitive Covered Data by rule.

  • “Small Businesses” are defined as entities with average annual gross revenue below $40 million that, on average, processed the Covered Data of 200,000 or fewer individuals per year.
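
To illustrate how these revenue and volume thresholds interact, the following sketch classifies a hypothetical entity under the definitions above. All function and field names are our own illustration, and the code assumes the disjunctive reading of the individual and device prongs; the bill itself prescribes no such test in code.

```python
from dataclasses import dataclass

@dataclass
class EntityProfile:
    """Hypothetical entity profile; field names are illustrative, not from the bill."""
    annual_gross_revenue: float        # USD
    individuals_covered: int           # individuals whose Covered Data is handled
    portable_devices_covered: int      # linkable portable connected devices
    devices_covered: int               # linkable connected devices
    individuals_sensitive: int         # individuals whose Sensitive Covered Data is handled
    portable_devices_sensitive: int
    devices_sensitive: int

def is_large_data_holder(e: EntityProfile) -> bool:
    """Sketch of the Large Data Holder test: $250M+ revenue plus either volume prong."""
    covered_prong = (e.individuals_covered > 5_000_000
                     or e.portable_devices_covered > 15_000_000
                     or e.devices_covered > 35_000_000)
    sensitive_prong = (e.individuals_sensitive > 200_000
                       or e.portable_devices_sensitive > 300_000
                       or e.devices_sensitive > 700_000)
    return e.annual_gross_revenue >= 250_000_000 and (covered_prong or sensitive_prong)

def is_small_business(e: EntityProfile) -> bool:
    """Sketch of the Small Business carve-out: under $40M average revenue and
    Covered Data of 200,000 or fewer individuals per year on average."""
    return e.annual_gross_revenue < 40_000_000 and e.individuals_covered <= 200_000
```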

Algorithmic impact and design assessments

A key feature of the APRA is Section 13, entitled “Civil Rights and Algorithms,” which mandates that Large Data Holders that use Covered Algorithms “in a manner that poses a consequential risk of harm” conduct an algorithm impact assessment, to be made available to federal authorities and the general public.

The scope of the Impact Assessment is broad, including:

  • A detailed description of the design process and methodologies of the Covered Algorithm, along with a statement of its purpose and proposed uses
  • A detailed description of the data used by the Covered Algorithm, including the specific categories of data that will be processed as input and any data used to train the model that the Covered Algorithm relies on
  • A description of the outputs produced by the Covered Algorithm
  • An assessment of the necessity and proportionality of the Covered Algorithm in relation to its stated purpose, and
  • A detailed description of the steps the Large Data Holder has taken or will take to mitigate potential harms from the Covered Algorithm to an individual or group of individuals, including harms to individuals under the age of 17 and discriminatory impacts.
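
For organizations considering how to operationalize these requirements, one minimal way to track the required elements is a structured record, as sketched below; the field names are our own, and the APRA prescribes no particular format.

```python
from dataclasses import dataclass, field

@dataclass
class AlgorithmImpactAssessment:
    """Illustrative record mirroring the assessment elements listed above;
    field names are our own, not terminology from the bill."""
    design_process_description: str       # design process and methodologies
    purpose_and_proposed_uses: str
    input_data_categories: list[str]      # specific categories of input data
    training_data_description: str        # data used to train the underlying model
    outputs_description: str
    necessity_and_proportionality: str    # assessment relative to the stated purpose
    # Mitigation steps should cover harms to individuals under the age of 17
    # and discriminatory impacts, per the final element above.
    mitigation_steps: list[str] = field(default_factory=list)
```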

Under the text of the draft legislation, the APRA imposes additional obligations on Covered Entities (including nonprofits and governmental entities) and service providers that design a Covered Algorithm, solely or in part, to collect, process, or transfer Covered Data in furtherance of a Consequential Decision.[1] Such an entity would also be required to conduct an Algorithm Design Evaluation prior to deployment. The Algorithm Design Evaluation would need to include an analysis of the design, structure, and inputs of the Covered Algorithm, including any training data used in its development, in order to reduce the risk of potential harms to individuals under the age of 17 and to individuals due to discriminatory impacts.

The draft legislation places ultimate enforcement authority in the hands of the Federal Trade Commission (FTC), though state law enforcement and regulators would also play a significant enforcement role.

Covered Entities would have two years to conduct the initial Impact Assessment and would then be required to provide assessments on an annual basis. Such entities would also have two years to complete their Design Evaluations. The FTC would be expected to issue rules and guidance on compliance with the new Impact Assessment and Design Evaluation requirements should the bill proceed.

The bill does grant Covered Entities and service providers several organizational protections. For example, these entities may “focus the impact assessment or evaluation on any Covered Algorithm, or portions of a Covered Algorithm, that will be put to use and may reasonably contribute to the risk of the potential harms.” Furthermore, those affected by this provision are permitted to redact and segregate trade secrets or other confidential or proprietary information from public disclosure when making their submissions.

Other notable measures

Section 13 of the draft bill, titled “Civil Rights and Algorithms,” also prohibits the collecting, processing, retaining, or transferring of Covered Data in a manner that discriminates on the basis of race, color, religion, national origin, sex, or disability (with exceptions for self-testing to prevent unlawful discrimination, diversifying an applicant or customer pool, or advertising economic opportunities or benefits to underrepresented populations).

Section 14 of the draft bill, “Consequential Decision Opt Out,” is a substantial departure from other sections of the APRA. This section governs the use of Covered Algorithms that make or facilitate a “Consequential Decision,” defined for this section alone as “a determination or an offer, including through advertisement, that uses Covered Data and relates to (1) the access of an individual or class of individuals to or equal enjoyment of housing, employment, education enrollment or opportunity, healthcare, insurance, or credit opportunities; or (2) access to, or restrictions on the use of, any place of public accommodation.” Notably, the obligations in this section apply to any entity, including Small Businesses and entities not otherwise regulated by the APRA. Such an entity must provide notice to individuals about the use of a Covered Algorithm to make or facilitate a Consequential Decision as well as an opportunity for individuals to opt out. The notice must be “clear, conspicuous, and not misleading” and “provide meaningful information about how the Covered Algorithm makes or facilitates a Consequential Decision, including the range of potential outcomes.”
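
As a purely illustrative sketch of the notice-and-opt-out mechanic, the code below routes a Consequential Decision away from the Covered Algorithm for individuals who have opted out. Every name here is hypothetical, and the human-review fallback is our assumption; the bill mandates the opt-out, not any particular implementation.

```python
from typing import Callable

def decide_with_opt_out(individual_id: str,
                        opted_out: set[str],
                        algorithmic_decision: Callable[[str], str],
                        human_review: Callable[[str], str]) -> str:
    """Gate use of a Covered Algorithm on an individual's opt-out status.
    All names are hypothetical; the fallback to human review is an assumption."""
    if individual_id in opted_out:
        # Opted-out individuals are not run through the Covered Algorithm.
        return human_review(individual_id)
    # For everyone else, the required notice must already have disclosed the
    # algorithm's use and its range of potential outcomes.
    return algorithmic_decision(individual_id)
```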

Regulatory gray areas in the training of algorithms

As noted above, the assessment and evaluation requirements include language relating to data used to train artificial intelligence models. Some observers, however, see a potential “gray area” as to whether and how people’s data can be used to train AI models.

The bill does include data minimization provisions, limiting Covered Entities and service providers to collecting and using only as much Covered Data as is necessary for a reasonable purpose. However, the bill’s language leaves open the question of whether developers of AI models, which are often trained on millions of publicly available data points, would be required to obtain specific permission to use such data for training, or would otherwise be limited in how this data can be used.

Next steps

Chair Cantwell and Chair McMorris Rodgers are the lead sponsors of the proposed legislation.

This newly announced bill is reminiscent of the American Data Privacy and Protection Act, which passed out of the House Energy and Commerce Committee with bipartisan support in 2022 but did not advance further; no companion bill was introduced in the Senate.

APRA is yet another example of mandates proposed in the US requiring increasingly comprehensive governance of AI and data. Impact assessments, for example, have been referenced throughout recent governmental updates, including the OMB memorandum on responsible AI in government. This closely aligns with international approaches, including those in the EU, which have emphasized the necessity of comprehensive transparency and pre-deployment assessment measures as a means of maintaining the safety of users and individuals.

How DLA Piper can help

DLA Piper’s award-winning AI practice is routinely recognized as a leader in the field of law and AI, leading Insider’s list of top AI lawyers, winning American Lawyer’s 2024 Best Law Firm Use of AI award, and earning recognition in the Financial Times’ 2023 Innovative Lawyers in Technology awards. Our combined team of lawyers and data scientists helps clients navigate the intricacies of AI governance, innovation, and risk management, helping AI systems not only comply with current mandates but also anticipate future regulatory developments, including those called out in the OMB memorandum discussed above. DLA Piper’s AI policy team in Washington, DC is led by the Founding Director of the Senate Artificial Intelligence Caucus.

For more information on AI and the emerging legal and regulatory standards, visit DLA Piper’s Focus on Artificial Intelligence. Watch our AI Chatroom series for additional insights on AI adoption.

For further information or if you have any questions, please contact any of the authors.


[1] The bill does not define “Consequential Decision” in Section 13 (or in Section 2, Definitions), instead referring back to the requirement that an algorithm impact assessment include a review of ways to mitigate discriminatory harms and harms to individuals under the age of 17. Note, however, that Consequential Decision is specifically defined in Section 14, as covered above.
