

December 19, 2024 | 14 minute read

House AI Task Force unveils report with focus on sectoral regulatory framework

An AI blueprint for future congressional action

The bipartisan task force on artificial intelligence (AI) in the US House of Representatives released its highly anticipated report on December 17, 2024, offering guiding principles, recommendations, and policy proposals to promote continued American leadership in responsible AI innovation while mitigating potential harms from misuse of the emerging technology.

The report is the consensus-driven final work product of the 24-member task force, whose membership is equally divided between Democrats and Republicans.

Background of the AI Task Force

Launched in February 2024 by House Speaker Mike Johnson (R-LA) and Democratic Leader Hakeem Jeffries (D-NY), the task force is led by co-chairs Jay Obernolte (R-CA) and Ted Lieu (D-CA). Its members are drawn from 20 committees to ensure comprehensive jurisdictional coverage of the range of potential policy implications posed by AI.

The 273-page report includes 66 key findings and 89 recommendations, organized into 15 chapters covering topics such as government use of AI, intellectual property, energy usage, and data centers. The energy issue, in particular, is expected to receive significant attention in the next Congress.

Although the report will likely not influence legislation in the remaining days of the current Congress, it is intended to offer guidance on AI legislative priorities in the coming year(s).

At a press conference announcing publication of the report, Chair Obernolte frequently used phrases such as “roadmap” and “a path forward.” In the introduction, the report is described as “a blueprint for future actions,” intended to provide “a thoughtful long-term vision for AI in our society.”

A sectoral approach leveraging existing agencies

At the outset of the press conference, Obernolte stressed that a key feature of the report is its preference for sectoral regulation – a point echoed by members from each political party.

“We think that our sectoral regulators have the knowledge and experience needed to regulate AI within their sectoral spaces,” Obernolte said, while noting that agencies may require additional resources, technical talent, and evaluation standards from the federal government to address the increased regulatory workload.

Obernolte pointedly disagreed with regulatory frameworks in other countries (though he did not cite any by name) that he described as “splitting off AI,” creating new bureaucracies, and imposing universal licensing requirements.

“In America, we believe in regulating outcomes, not tools,” he added. Noting that there are already laws against cybertheft, for example, Obernolte stated that new laws were not needed, but that Congress should work to empower law enforcement authorities.

Obernolte also pointed out that agencies are already regulating AI in their sectoral domains. He cited examples such as the Food and Drug Administration (FDA), which he said has issued nearly 1,000 permits for the use of AI in medical devices, as well as National Highway Traffic Safety Administration (NHTSA) regulations on autonomous vehicles, and Federal Aviation Administration (FAA) rules on AI-powered aviation technologies.

A major takeaway of the report is the bipartisan consensus among AI-focused lawmakers to avoid reinventing the wheel, coupled with support for public-private partnerships and protection of the innovation ecosystem. Although there have been suggestions to create an entirely new agency, or “Department of AI,” the US will likely rely on existing laws and structures, modernizing them based on the relevant agency and AI application.

Principles: Emphasis on incrementalism

The report’s seven high-level principles include the recommendation that Congress should refrain from creating a grand regulatory framework all at once, given the rapid pace of technological advancements that will require more flexibility and vigilance.

Obernolte also noted that under the rubric of AI, some issues demand more urgent attention than others. For instance, he and several other task force members highlighted the problem of non-consensual intimate imagery as something requiring more immediate action.

As the report states, “Congress should adopt an agile approach that allows us to respond appropriately and in a targeted, achievable manner that benefits from all available evidence and insights. Supporting this agile paradigm requires continual learning and adaptation. Congress should regularly evaluate the effectiveness of its policies and update them as AI technologies and their impacts evolve.”

Underscoring the bipartisan nature of the task force, the report recognizes a series of actions on AI policy undertaken by both the Biden Administration and the first Trump Administration. It also provides a compilation of AI policy actions taken to date by both administrations and Congress.

The report does not endorse any specific pieces of legislation.

At the press conference, Obernolte stated that the task force has met with President-elect Donald Trump’s technology transition team, as well as David Sacks, the incoming administration’s AI coordinator. Obernolte said that it is not clear whether the task force will continue in the next Congress, and suggested the possibility of a new special committee that could play a coordinating role in shepherding legislation through the standing policy committees.

Summary of findings and recommendations

As noted, the report is divided into 15 main sections. Below, we highlight some of the report’s notable findings and recommendations:

Government use

  • Findings: “The federal government should utilize core principles and avoid conflicting with existing laws,” “The federal government should be wary of algorithm-informed decision-making,” and “Policies governing agency use of AI should provide holistic, operations-focused guidance spanning the AI lifecycle to enable efficient agency implementation. AI systems can have multiple applications for mission- or programmatic-specific use cases.”

  • Recommendations: Flexible governance, reduced administrative burden and bureaucracy, supporting the National Institute of Standards and Technology (NIST) in developing guidelines for federal AI systems, and improving cybersecurity of federal systems, including federal AI systems.

Federal preemption of state law

  • Findings: “Preemption can allow state action subject to floors or ceilings.”

  • Recommendations: “Study applicable AI regulations across sectors.”

Data privacy

  • Findings: “Americans have limited recourse for many privacy harms,” “Federal privacy laws could potentially augment state laws,” and “AI has the potential to exacerbate privacy harms.”

  • Recommendations: “Explore mechanisms to promote access to data in privacy-enhanced ways,” “Ensure privacy laws are generally applicable and technology-neutral,” and “Congress can also support partnerships to improve the design of AI systems that consider privacy-by-design and utilize new privacy-enhancing technologies and techniques.”

National security

  • Findings: “AI is a critical component of national security,” and “AI can vastly improve Department of Defense business processes.”

  • Recommendations: “Continue oversight of autonomous weapons policies” and “Support international cooperation on AI used in military contexts.”

Research, development, and standards

  • Findings: “Federal investments in fundamental research have enabled the current AI opportunity,” “A closed AI research ecosystem could limit US competitiveness in AI,” and “There is often a wide gap between the basic research conducted at universities and the commercialization activities carried out by industry.”

  • Recommendations: “Increase technology transfer from university research and development (R&D) to market,” “Promote public-private partnerships for AI R&D,” “Uphold the US approach to setting standards,” “Align national AI strategy with broader US technology strategy,” and “Federal science agencies should facilitate access to their computational resources and promote greater availability of their data.”

Civil rights and civil liberties

  • Findings: “Improper use of AI can violate laws and deprive Americans of our most important rights,” “Understanding the possible flaws and shortcomings of AI models can mitigate potentially harmful uses of AI,” and “A core consideration should be mitigating harmful outcomes impacting Americans’ civil rights and civil liberties.”

  • Recommendations: “Have humans in the loop to actively identify and remedy potential flaws when AI is used in highly consequential decision-making,” “Agencies must understand and protect against using AI in discriminatory decision making,” and “Improved private sector engagement in, and the development of, industry-led technical standards could help provide a rigorous technical basis to guide the proper use of AI systems in decision-making.”

Education and workforce

  • Findings: “Fostering domestic AI talent and continued US leadership will require significant improvements in basic science, technology, engineering, and math (STEM) education and training,” and “K–12 educators need resources to promote AI literacy.”

  • Recommendations: “Invest in K–12 STEM and AI education and broaden participation,” “Support the standardization of work roles, job categories, tasks, skillsets, and competencies for AI-related jobs,” and “Monitor the interaction of labor laws and worker protections with AI adoption.”

Intellectual property (IP)

  • Findings: “It is unclear whether legislative action is necessary in some cases, and a number of IP issues are currently in the courts,” “While some use cases are legitimate and protected forms of expression, the proliferation of deepfakes and harmful digital replicas is a significant and ongoing challenge,” and “It will be vital to avoid overreach and understand the potential costs and benefits as much as possible. Any new IP-related legislation or regulations should target specific known issues or problems; tailor definitions, requirements, and consequences narrowly; reduce uncertainty rather than increase it; and focus on improving the ability of the private sector to innovate and creators to thrive.”

  • Recommendations: “Clarify IP laws, regulations, and agency activity,” and “Appropriately counter the growing harm of AI-created deepfakes.”

Content authenticity

  • Findings: “There is currently no single, optimal technical solution to content authentication,” and “Digital identity technology allows a person online to verify who they are and reduces fraud.”

  • Recommendations: “Address demonstrable harms, not speculative harms of synthetic content,” “Identify the responsibilities of AI developers, content producers, and content distributors when it comes to synthetic content,” “Examine existing laws related to harmful synthetic content,” “Ensure victims have the necessary tools,” “Congress should work with industry to support the standardization of technical solutions to synthetic content, whether through pre-standardization research, public-private partnerships, direct engagement in international standard setting, or the development of voluntary standards and guidelines for addressing synthetic content,” and “Congress should also explore whether to authorize activities to support government adoption and interagency coordination for a particular technical solution to content authentication.”

Open and closed systems

  • Findings: “Open models offer many benefits, including customization, transparency, and accessibility. However, there is an increased risk that malicious actors could use open models to cause harm, including perpetrating financial fraud, threatening national security, or large-scale identity theft.”

  • Recommendations: “Focus on demonstrable harms and physical threats,” “Evaluate chemical, biological, radiological, or nuclear (CBRN) threats in light of AI capabilities,” and “Continue to monitor the risks from open-source models.”

Energy usage and data centers

  • Findings: “The growing demands of AI are creating challenges for the grid,” “Continued US innovation in AI requires innovations in the energy sector,” “Planning properly now for new power generation and transmission is critical for AI innovation and adoption,” “AI tools will play a role in innovation and modernization in the energy sector,” and “Despite their promise, widespread deployment of new nuclear power is not a near-term solution due to the long lead times required to license and construct a first-of-a-kind nuclear power plant.”

  • Recommendations: “Support and increase federal investments in scientific research that enables innovations in AI hardware, algorithmic efficiency, energy technology development, and energy infrastructure,” “Strengthen efforts to track and project AI data center power usage,” “Create new standards, metrics, and a taxonomy of definitions for communicating relevant energy use and efficiency metrics,” “Ensure that AI and the energy grid are a part of broader discussions about grid modernization and security,” “Ensure that the costs of new infrastructure are borne primarily by those customers who receive the associated benefits,” and “Promote broader adoption of AI to enhance energy infrastructure, energy production, and energy efficiency.”

Small business

  • Findings: “Small businesses can lack sufficient access to capital and AI resources,” and “Small businesses face excessive challenges in meeting AI regulatory compliance.”

  • Recommendations: “Provide resources for small business AI adoption,” “Ease compliance burdens for small businesses,” and “The National Artificial Intelligence Research Resource (NAIRR) pilot can facilitate AI adoption by small businesses by providing difficult-to-acquire data and computing resources.”

Agriculture

  • Findings: “Lack of reliable network connectivity in rural and farming communities impedes AI adoption in the agricultural sector,” “Greater adoption of AI at the US Department of Agriculture (USDA) could enhance delivery of numerous agriculture programs and reduce costs for farmers and others,” and “Although precision agriculture technologies have been available since the 1990s, only 27 percent of US farms or ranches utilize such technology.”

  • Recommendations: “Direct USDA to better utilize AI in program delivery” and “Continue to review the application of the Commodity Futures Trading Commission (CFTC)’s principles-based framework to ensure it captures unique risks posed by AI in financial markets.”

Healthcare

  • Findings: “The lack of ubiquitous, uniform standards for medical data and algorithms impedes system interoperability and data sharing,” and “One critical area for improved guidance or regulation is industry post-market surveillance and self-validation of health AI tools.”

  • Recommendations: “Create incentives and guidance to encourage risk management of AI technologies in healthcare across various deployment conditions to support AI adoption and improve privacy, enhance security, and prevent disparate health outcomes,” “Support the development of standards for liability related to AI issues,” “Support appropriate payment mechanisms without stifling innovation,” and “Congress should explore whether new laws are necessary to support FDA’s post-market evaluation process of health AI tools.”

Financial services

  • Findings: “AI technologies are already deployed across the financial services sector,” “Some regulators use AI to identify non-compliance with regulations,” and “Small financial services firms can be at a disadvantage in AI adoption.”

  • Recommendations: “Encourage and resource regulators to increase their expertise with AI,” “Maintain consumer and investor protections in the use of AI in the financial services and housing sectors,” “Consider the merits of regulatory ‘sandboxes’ that could allow regulators to experiment with AI applications,” and “Ensure that regulations do not impede small firms from adopting AI tools.”

DLA Piper is here to help

As part of the Financial Times’ 2023 North America Innovative Lawyer awards, DLA Piper was shortlisted for an Innovative Lawyers in Technology award for its AI and Data Analytics practice.

DLA Piper’s AI policy team in Washington DC is led by the Founding Director of the Senate Artificial Intelligence Caucus.

DLA Piper is a founding member of the US AI Safety Institute Consortium (AISIC), a first-of-its-kind group housed under the National Institute of Standards and Technology (NIST).

For more information on AI and the emerging legal and regulatory standards, visit DLA Piper’s focus page on AI.

Gain insights and perspectives that will help shape your AI strategy through our newly released AI Chatroom Series.

For further information or if you have any questions, please contact any of the authors.
