
July 31, 2024 | 5 minute read

US, UK, and EU regulators issue joint statement on competition and consumer protection risks associated with AI

On July 23, 2024, officials from the Federal Trade Commission (FTC), Department of Justice (DOJ), UK Competition and Markets Authority (CMA), and European Commission (together, antitrust enforcers) issued a Joint Statement on Competition in Generative AI Foundation Models and AI Products.

In doing so, the antitrust enforcers reinforce their “commitment to the interests of [their] people and economies” by “ensur[ing] effective competition and the fair and honest treatment of consumers and businesses.”

The joint statement acknowledges the “transformational potential” of artificial intelligence (AI) and the likelihood that the use of AI will “boost innovation and drive economic growth.” It cautions, however, that “technological inflection points” – like the rapid evolution of generative AI – “can introduce new means of competing” and could lead to “tactics that could undermine fair competition.”

The joint statement outlines the antitrust enforcers’ view of the key risks to competition associated with the development and deployment of AI, as well as principles to protect competition in the AI ecosystem, each of which is summarized below.

The FTC, DOJ, and CMA (together, consumer protection enforcers) also highlighted their consumer protection authority, noting that they will be “vigilant of any consumer protection threats that may derive from the use and application of AI.” The consumer protection enforcers’ views focus on potentially harmful uses of consumer data and are also summarized below.

Risks to competition

The joint statement identifies three categories of competition risk associated with the adoption of foundation models and other AI products.

Concentrated control of key inputs. The antitrust enforcers note that development of foundation models and other AI products requires the use of specialized hardware, substantial computing power, large amounts of data, and specialist technical expertise, which could potentially allow a small number of companies “to have outsized influence over the future development.” This centralization of influence could allow those companies to gain an unfair advantage that would harm competition.

Entrenching or extending market power in AI-related markets. The antitrust enforcers’ assessment of the current digital marketspace is that large incumbent digital firms already possess substantial market power at multiple levels of AI development, giving those firms “control of the channels of distribution of AI or AI-enabled services to people and businesses.” Allowing such firms to extend or entrench their current positions will, according to the antitrust enforcers, be harmful to future competition.

Arrangements involving key players could amplify risks. In the antitrust enforcers’ view, “partnerships, financial investments, and other connections between firms related to the development of generative AI have been widespread to date.” If such connections were consolidated in the future, they “could be used by major firms to … steer market outcomes in their favor,” which would likely be detrimental to competition.

Other competition risks associated with AI

The antitrust enforcers also acknowledge other competition risks associated with the deployment of AI. The joint statement identifies the need for vigilance against AI being used for anticompetitive activities like price fixing or collusion, as well as AI that enables price discrimination or exclusionary practices.

According to the enforcers, the “[k]ey to assessing these risks will be focusing on how the emerging AI business models drive incentives, and ultimately behavior.”

Principles for protecting competition in the AI ecosystem

While acknowledging that AI-related competition questions are expected to be highly fact-specific, the joint statement lists three “common principles [that] will generally serve to enable competition and foster innovation”:

  • Fair dealing,
  • Interoperability, and
  • Choice.

The antitrust enforcers emphasize that the AI ecosystem will benefit from more firms engaging in fair dealing to avoid anticompetitive exclusionary tactics, while cautioning that practices that limit interoperability – even when companies cite privacy and security concerns – will be closely scrutinized.

Similarly, the antitrust enforcers will closely scrutinize lock-in mechanisms that “prevent companies or individuals from being able to meaningfully seek or choose other options.”

Consumer risks associated with AI

The consumer protection enforcers also believe that “AI can turbocharge deceptive and unfair practices that harm consumers,” and highlight two consumer protection concerns in particular: the deceptive or unfair use of consumer data to train models, and the use of a business customer’s data in a way that could expose competitively sensitive information.

The joint statement also stresses transparency, stating that consumers should be “informed, where relevant, about when and how an AI application is employed in the products and services they purchase or use.”

Key takeaways

Enforcers in the US and the EU are increasingly focused on the risks associated with the development and deployment of AI, particularly in the areas of competition and consumer protection.

The joint statement highlights some of those risks, warns companies not to engage in exclusionary tactics that undermine competition, encourages interoperability of models and other AI products, and stresses the benefits of being able to choose among diverse products and business models.

The same enforcers are also keenly aware of the potential for AI to harm consumers, and they warn companies not to engage in practices that could be considered unfair or deceptive. The joint statement reflects a coordinated international approach to monitoring and governing the development of AI in a way that aims to protect both competition and consumers.

DLA Piper is here to help

DLA Piper’s team of lawyers and data scientists helps organizations navigate the complex workings of their AI systems to ensure compliance with current and developing regulatory requirements. We continuously monitor AI-related updates and developments, and their impacts on industry, across the world.

As part of the Financial Times’s 2023 North America Innovative Lawyer awards, DLA Piper received the Innovative Lawyers in Technology award for its AI and Data Analytics practice.

For more information on AI and the emerging legal and regulatory standards, visit DLA Piper’s focus page on AI.

Gain insights and perspectives that will help shape your AI Strategy through our newly released AI ChatRoom series.

For further information or if you have any questions, please contact any of the authors.