
June 1, 2023 | 11 minute read

FTC to increase focus on biometric information

Technologies using individuals’ physical characteristics face increased scrutiny

On May 18, 2023, the Federal Trade Commission (FTC or Commission) issued a Policy Statement on Biometric Information and Section 5 of the Federal Trade Commission Act (Policy Statement), signaling an intent to hold companies more accountable for their collection and use of consumers’ biometric information.

In an open Commission meeting, FTC Chair Lina Khan, Commissioner Rebecca Slaughter, and Commissioner Alvaro Bedoya voted unanimously to adopt the Policy Statement. Citing growing concerns about consumer privacy, data security, bias, and discrimination, the Policy Statement introduces an especially broad definition of biometric information that sets it apart from those provided in existing US laws and regulations, including the California Consumer Privacy Act (CCPA) and Washington state’s recently enacted My Health My Data Act.

While not legally binding, the Policy Statement makes clear that the FTC plans to exercise its discretionary authority to combat unfair or deceptive acts related to both the collection and use of consumers’ biometric information and the marketing and use of biometric information technologies.

Overview

In explaining this new FTC enforcement priority, the Policy Statement provides guidance on three key issues. First, it establishes a novel and expansive definition of “biometric information” that differs from how existing US laws and regulations use the term. Second, it outlines practices the FTC will scrutinize to determine whether companies using biometric information and/or biometric information technologies are complying with Section 5 of the Federal Trade Commission Act (FTC Act). And, finally, the Policy Statement emphasizes its alignment with and expansion of the Commission’s standing approach to regulating artificial intelligence (AI) technologies.

Definition of biometric information

In contrast to existing US laws and regulations, the Policy Statement defines the term “biometric information” to mean all “data that depict or describe physical, biological, or behavioral traits, characteristics, or measurements of or relating to an identified or identifiable person’s body.”

The definition also covers “data derived” from the above – "to the extent that it would be reasonably possible to identify the person from whose information the data had been derived.”

Scrutinized practices by companies

The Policy Statement outlines how the Commission will consider whether a business’s use of biometric information or biometric information technology could be deceptive or unfair in violation of the FTC Act – and advises companies to:

  • Avoid false or unsubstantiated marketing claims. Businesses must have a reasonable basis (such as scientific or engineering tests) for any claims made regarding the validity, reliability, accuracy, performance, fairness, or efficacy of biometric information technologies.

  • Avoid deceptive statements about collecting and using biometric information. Statements may be deceptive if they do not contain all material information about how the business collects or uses biometric information or implements technologies using biometric information.

  • Assess foreseeable harms before collecting biometric information. Such an assessment should holistically evaluate potential risks, mirror real-world implementations, and take into account disproportionate harms to particular demographics of consumers.

  • Take proactive measures to mitigate known or foreseeable risks. Such measures include organizational and technical measures, such as updating relevant systems and implementing policies to restrict access to biometric information.

  • Clearly disclose the collection and use of biometric information. Engaging in surreptitious and unexpected collection or use of biometric information may be an unfair practice (eg, exposing the consumer to risks such as stalking, stigma, reputational harm, or emotional distress).

  • Provide a mechanism for addressing consumer complaints. Injuries to consumers may be compounded if there is no process for accepting and resolving disputes related to a business’s use of biometric information technologies.

  • Evaluate the practices and capabilities of third parties. Businesses should monitor and ensure third parties (eg, “affiliates and vendors”) comply with contractual requirements to minimize consumer risk, including through organizational and technical measures to supervise and audit compliance.

  • Conduct appropriate training for employees and contractors. Businesses should provide regular guidance and instruction to employees and contractors whose duties involve interacting with biometric information or biometric information technologies.

  • Monitor biometric information technologies. Businesses should ensure that deployed biometric technologies are functioning as anticipated, that users are operating them as intended, and that such uses are not likely to harm consumers.

The Policy Statement further recommends that, particularly in view of rapid changes in technological capabilities and uses, businesses should continually assess whether their uses of biometric information and biometric information technologies are likely to cause consumer injury in violation of Section 5 of the FTC Act – and, if so, to cease such practices.

Consistency with AI enforcement

Noting that biometric information technologies in some instances utilize algorithms and/or AI, the Policy Statement suggests that companies should consider the FTC’s previously announced positions on those topics – particularly around harnessing the benefits of AI without inadvertently introducing bias or other unfair outcomes.

The FTC’s AI guidance emphasizes that, in addition to the FTC Act, the Commission enforces two other laws important to developers and users of AI:

  • Fair Credit Reporting Act (FCRA). The FCRA comes into play in certain circumstances where an algorithm is used to deny people employment, housing, credit, insurance, or other benefits.

  • Equal Credit Opportunity Act (ECOA). The ECOA makes it illegal for a company to use a biased algorithm that results in credit discrimination on the basis of race, color, religion, national origin, sex, marital status, age, or because a person receives public assistance.

Commission publications have also advised companies on how to manage the consumer protection risks of AI and algorithms by using such technologies “truthfully, fairly, and equitably”:

  • Start with the right foundation. Companies should consider how to improve their data sets, design their AI models to account for data gaps, and, in light of any shortcomings, limit where or how to use such models.

  • Assess for discriminatory outcomes. Companies must test their algorithms – before deployment and regularly thereafter – to avoid discriminating on the basis of race, gender, or other protected classes.

  • Embrace transparency and independence. To prevent bias, companies should employ transparency frameworks and independent standards, conduct and publish independent audit results, and open their data or source code to outside inspection.

  • Do not exaggerate algorithms’ capabilities. To avoid deception or discrimination in the rush to deploy new technology, companies should take pains not to overpromise what their algorithms can deliver.

  • Tell the truth about data use. Companies should inform consumers about how they obtain the data that powers their model – or risk a potential order to delete not only ill-gotten data, but any model trained with such data.

  • Do more good than harm. Companies should ensure that their models are not likely to cause substantial injury to consumers that is not reasonably avoidable by consumers and not outweighed by countervailing benefits to consumers or to competition.

Implications and takeaways

The FTC’s nonbinding Policy Statement signals the Commission’s intention to challenge uses of biometric information and biometric information technologies that it determines to be deceptive or unfair in violation of the FTC Act. Key takeaways from this newly announced focus include:

  • Expansive definition of biometric information. With the Policy Statement, the FTC adopts an extremely broad understanding of biometric information, one that requires an analysis distinct from that under any existing US law or regulation.

  • Data governance model. Through its list of factors that the FTC will consider in determining whether a business’s use of biometric information or biometric information technology could be deceptive or unfair, the Policy Statement effectively advises businesses to adopt a data governance model in line with the European Union’s General Data Protection Regulation and the CCPA. Additionally, in view of the growing integration of AI with biometric identification and authentication systems, the Policy Statement encourages companies to mitigate the consumer protection risks of AI and algorithms by developing and deploying such technologies in a manner consistent with the Commission’s guidance on how to do so truthfully, fairly, and equitably.

  • Ongoing monitoring. The Policy Statement explains that the FTC expects companies to conduct ongoing monitoring, through organizational and technical measures, to ensure that biometric information technologies are functioning as anticipated and not likely to harm consumers. Such monitoring obligations include tracking the practices and capabilities of third parties, such as affiliates and vendors who have access to biometric information or biometric information technologies.

  • No private right of action. Unlike several state and local laws, there is no private right of action under Section 5 of the FTC Act. The nonbinding Policy Statement does nothing to change that fact.

  • FTC’s power under growing scrutiny. In recent years, the FTC has sought to aggressively push the limits of its congressionally authorized authority (see, eg, Chair Khan’s “commercial surveillance” rulemaking). Given the Supreme Court’s ruling on April 14, 2023 in Axon Enterprise, Inc. v. Federal Trade Commission (holding that US district courts have jurisdiction to hear constitutional challenges to the Commission’s enforcement actions) and in light of the Court’s announcement on May 1, 2023 that it will reconsider the Chevron doctrine (the four-decade-old precedent that courts should defer to reasonable agency interpretations of ambiguous provisions in congressional statutes), it would not be surprising to see the Court weigh in on the constitutional limits of the FTC’s policy statements if called upon to do so.

For more information, please contact the authors or your DLA Piper relationship attorney.
