November 29, 2022 | 7 minute read

White House AI Bill of Rights may prompt agency rulemaking and legislation

The White House on October 4 released a document titled “Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People” that could prompt rulemaking and future legislation at the state, local and federal levels.

The blueprint issued by the White House Office of Science and Technology Policy (OSTP) does not have the force of law. However, in a blog post accompanying the Bill of Rights (BoR), its architects declare, “Nearly every person who spoke up shared a profound eagerness for clear federal leadership and guidelines to protect the public. This framework is an answer to those calls—and a response to the urgent threats posed to the American public by unchecked automated systems.”

Specifically, senior White House leaders call upon policymakers to “codify these measures into law or use the framework and its technical companion to help develop specific guidance on the use of automated systems within a sector.”

The BoR is the most in-depth AI instruction for government agencies to date, with marching orders to guide the design, use and deployment of automated systems. While previous Executive Orders have outlined high-level ethical principles and instructed agencies to evaluate potential uses of AI, this document essentially serves as a “blueprint” for eventual agency rulemaking and regulation and informs industry of the basic parameters that will govern this technology.

Unpacking the Blueprint for an AI Bill of Rights

In the BoR, White House officials provide “a set of five principles and associated practices to help guide the design, use, and deployment of automated systems to protect the rights of the American public in the age of artificial intelligence.”

“It is intended to support the development of policies that protect civil rights and promote democratic values in the building, deployment and governance of automated systems,” the document states.

These “core protections to which everyone in America should be entitled” include:

  • Safe and effective systems.
  • Algorithmic discrimination protections to ensure that systems are designed and used in an equitable way.
  • Data privacy protections that are “built-in” and allow people better control over how data about themselves is used.
  • Notice and explanation, so that users are aware that automated systems are being used and understand their potential impact.
  • The right to opt out of automated decision-making “in favor of a human alternative, where appropriate” and to have “access to timely human consideration and remedy by a fallback and escalation process if an automated system fails, it produces an error, or you would like to appeal or contest its impacts on you.”

AI can help drive many important innovations with positive social benefits, OSTP recognizes, and is not itself the cause of discrimination and inequities. But “automated systems can replicate or deepen inequalities already present in society against ordinary people, underscoring the need for greater transparency, accountability, and privacy,” OSTP stated.

While the blueprint is intended to help “fill the gaps” in terms of guidance to protect the public, the administration is also pledging to improve enforcement of existing protections in a tandem approach.

To help technologists and entrepreneurs develop trustworthy AI in which considerations of fairness, safety and privacy are incorporated into the design and evaluation of AI products and services, the Department of Commerce’s National Institute of Standards and Technology (NIST) is developing a risk management framework and recommendations for operationalizing these considerations.

The White House is particularly focused on workers’ rights, noting that automated systems “have been used to surveil workers in the workplace, in some cases restricting their ability to organize.”

To protect consumers, the White House said the Federal Trade Commission (FTC) is exploring rules to curb commercial surveillance, algorithmic discrimination and lax data security practices. With regard to financial services, the administration highlighted stepped-up enforcement actions by the Consumer Financial Protection Bureau (CFPB) to require that “creditors provide consumers with specific and accurate explanations when credit applications are denied or other adverse actions are taken, even if the creditor is relying on a black-box credit model using complex algorithms.”

Further, the Department of Education is being called on to release recommendations on the use of AI for teaching and learning by early 2023.

Protecting patients from discrimination in healthcare and ensuring fair access to housing for renters as well as homebuyers and owners are among other administration priorities aimed at scrutinizing how automated systems and models are designed and deployed.

The blueprint also envisions new policies and guidance for procuring and using AI products and services by federal government agencies, implementing policies to improve the transparency of and enhance public trust in government, and advancing US international leadership by example in adopting and following AI ethics principles.

As OSTP acknowledges, the blueprint is not the Biden Administration’s last word on policies for addressing the impacts, both positive and negative, of rapidly emerging AI technologies. A wide range of federal agencies will consider and develop new rules and guidance governing various sectors of the economy. Congress is encouraged to use the framework as a resource in codifying these or comparable principles into law.

But the White House said the principles are not targeted only to public-sector policymakers. Project managers, workers, parents and healthcare providers are among stakeholders encouraged to use the new framework in assessing AI systems and products and advocating to “ensure that innovation is rooted in inclusion, integrity, and our common humanity.”

The blueprint was developed after a nearly yearlong consultation with more than 20 federal agencies, as well as feedback from civil rights and civil society organizations, law enforcement, researchers, educators and tech companies large and small.

OSTP issued a request for information in October 2021 inviting interested parties to weigh in with their insights, ideas and experiences to inform the development of “a bill of rights for an AI-powered world.” The White House also issued a November 2021 request for information specifically on the use and governance of biometric technologies. A list of the organizations and individuals who provided comments is included as part of the appendix of the blueprint document.

The White House said its decision making was guided by questions such as how to “think about equity at the start of a design process, and not only after issues of discrimination emerged downstream,” and ways to “ensure that the guardrails to which we are entitled in our day-to-day lives carry over into our digital lives.”

DLA Piper established a formal Artificial Intelligence Practice in May 2019. DLA Piper's global Artificial Intelligence practice assists businesses in federal and congressional affairs and helps organizations understand the legal and compliance risks arising from the creation and deployment of these emerging and disruptive technologies. Our AI team is composed of true thought leaders in this emerging field who have been recognized as producing some of the leading analyses of these issues. The group spans the globe, with particular depth in the United States, Canada and the United Kingdom.

For more information, please contact either of the authors.
