15 April 2024 | 9 minute read

AI and Advertising – Emerging Themes

As the data makes clear (see Statista report, December 2023), marketers are prioritising AI investment this year, and the majority are already using AI. This is consistent with WPP's announcement that it plans to spend GBP 250 million in an AI push.

Last week saw details released of the brands that secured Super Bowl ad slots. With premium slots such as these commanding a USD 7 million price tag for a 30-second spot, the attractiveness of the cost and time savings of using AI, whether in creating high-quality assets, in the ideation process or in optimising the deployment of ads, is unsurprising.

These efficiencies cut both ways, and we are therefore seeing regulators use AI tools to proactively search for online ads which might break the rules. Indeed, the new five-year strategy for 2024-2028 of the ASA (the UK's primary advertising regulator) is entitled "AI-assisted collective ad regulation".

As with any new technology, the use of AI in advertising presents various pitfalls that advertisers increasingly need to navigate. We explore some key themes here:


AI-generated or assisted content in advertising

To date, few jurisdictions have brought in advertising laws or regulations specifically addressing AI-generated or assisted content, though this may change in future. As such, the starting point is that the general rules applying to all advertising content will apply to content generated by, or with the assistance of, AI. Even though AI can provide a shortcut, the same review processes applied to traditional content creation need to be followed.

In the UK, in August 2023 the ASA reiterated that the CAP Code is media-neutral, with a focus on how consumers interpret ads as opposed to the role AI played in their creation.

Some non-governmental industry bodies have issued guidance: for example, in November 2023 the IPA and ISBA announced twelve guiding principles for advertisers on the use of generative AI in advertising, including that AI should be used ethically, transparently, and not in a manner likely to undermine public trust in advertising.

In the US, the FTC issued various comments in 2023 regarding the use of AI in advertising, highlighting it as an increasing concern.

Advertisers should pay particular attention to the risk that AI-generated or assisted advertising content could mislead consumers. It is well known that AI can generate content which appears real but is fictitious or otherwise incorrect, or which exaggerates the claims being made. This includes content depicting events occurring or people speaking, acting or presenting in ways which did not occur in reality ("lookalike", "soundalike" and "deepfake" content). For example, if AI produced images purporting to show the results gained from using a cosmetics product, viewers could be misled as to the product's efficacy in the same way as by photoshopped or filtered social media posts or product images.

It is also widely acknowledged (including by the US FTC) that generative AI can inadvertently produce content which perpetuates biased or harmful messaging regarding gender, race, body image and the like. This is often unforeseen and unintentional, a result of the relatively opaque internal workings of the algorithms and/or neural networks underpinning the AI. However, the reputational impact of releasing biased and inaccurate messaging could be significant. Whilst not an advertisement as such, last year an online news outlet released a series of AI-generated images reimagining a product with a local version for every country. This resulted in a huge public backlash due to the cultural inaccuracies depicted, such as representing one country by reference to the carrying of weapons. In this way, AI-generated ads could fall foul of regulations regarding harm and offence, and special care must be taken; for example, in the UK there are specific rules against negative stereotyping based on gender.

AI assistance with campaign promotion or distribution

We are seeing increasing use of AI in ad deployment, including the dynamic creation of content to target different audiences and the use of algorithmic insights to maximise consumer engagement and conversion. For example, Google Ads Performance Max offers AI-powered features such as dynamic adjustment of the timing of ads and the target audiences, as well as of headlines, descriptions and ad copy, tailoring ads to specific users.

Such features can result in more conversions, but they can also mean that advertisers are not in control at all times of what content appears, and to whom and how it appears. Regulators, of course, continue to assess ads based on how they are interpreted by viewers, regardless of the advertiser's intention.

In the UK, a 2023 decision of the ASA concerned an image showing a model with an unbuttoned top exposing a bare chest. The ASA determined that the ad objectified the model and was therefore irresponsible and likely to cause serious offence. Although AI played a role in the production of the ad, the image in question had been selected by the marketing agency from images supplied by the brand. The ASA confirmed that regardless of how the ad was produced or distributed, it was advertisers who were primarily responsible for ensuring that their ads were compliant.

Similarly, a decision from late last year concerned an ad for an online marketplace, featuring various images in a row including (i) a young girl in a bikini, (ii) a metallic facial roller, (iii) a woman in a crop top, (iv) a jock strap, (v) a balloon-tying tool and (vi) a woman's torso in a halter-neck dress. Despite the advertiser's claim that the ads were deployed by an AI functionality which used an algorithm to pick products from over one million uploaded images, the complaints were upheld. The ASA confirmed that, regardless of the mechanics behind the ad, taken in its entirety and with no explanation or labelling, the products were likely to be seen as sexual in nature and, since the ads appeared in general media and were untargeted, were likely to cause widespread offence.

In this context, algorithmic bias is again an issue, and one on which we expect growing focus from regulators. In November 2023, a California court ruled that a social media ad-targeting system, in which ads targeted users in accordance with their age, gender and other protected categories, violated California anti-discrimination law. The complaint involved advertising for life insurance policies which targeted younger and older platform users differently. While the case occurred in the specific context of anti-discrimination law, it nonetheless highlights how algorithmic deployment of advertising can have inadvertent and undesirable outcomes.


"AI-washing"

The explosion of interest in generative AI over the last year has given rise to the phenomenon of "AI-washing", in which (as with "greenwashing") claims made by businesses regarding the use of AI in their products or services may mislead the public. A typical example is a claim exaggerating the use of AI in a particular software solution to imply that the underlying technology is more sophisticated than it really is.

Naturally, "AI-washing" risks breaching general rules on misleading advertising. It is an area in which we foresee heightened regulatory attention in 2024; in the US, both the FTC and the SEC have commented on the issue.


Labelling of AI-generated content

The proliferation of AI-generated or assisted advertising content has led to calls for such content to be clearly marked or labelled for consumers, as part of overall transparency commitments. There have also been calls for AI-related certification or trust marks to indicate the extent of the role AI played in certain content (from merely "touched up" to entirely generated). However, no major consensus on such labelling has yet emerged, and at least in the UK, it appears that regulation on this is not imminent.

Currently, few jurisdictions worldwide impose AI-specific obligations to mark or label AI-generated content, but existing rules, such as the UK rules on misleading advertising, still need to be considered. While China obliges generative AI service providers to tag images, videos and other content generated by their services, this obligation applies to AI firms rather than advertisers, and does not encompass all AI-generated advertising content as a general measure.

A concern for labelling has particularly arisen in the context of AI-generated influencers, who are increasingly successful and whom some consumers have mistaken for real humans. In the UK, the ASA has acknowledged the phenomenon but confirmed that, for now, there is no obligation to notify users that such influencers are AI-generated. But again, rules around labelling and transparency in influencer marketing still apply. Some platforms, including Meta and Google, are looking to introduce obligations to notify users when content is AI-generated in certain contexts, for example election advertising and advertising relating to social or political issues. However, this is far from a general obligation covering all AI-generated content, and reflects the heightened sensitivity and consideration given to political advertising even when AI is not involved.


Takeaways

Although few jurisdictions have specific regulations for AI-generated content, from an advertising law perspective (and putting aside any IP issues, which are beyond the scope of this article) advertisers should still be wary of how they use AI content, especially given the risks of misleading content and bias.

Campaigns deployed with AI assistance also require comprehensive and ongoing monitoring, especially given the risk that dynamic targeting of certain audiences can have unintended consequences. Regulators are increasingly making AI a focus, and appeals to the sometimes "unknowable" nature of AI technology are unlikely to be accepted as an excuse for non-compliance. Further, while platform tools can support compliance, in the event of regulatory scrutiny what will matter is the overall impact of the ad message and how audiences interpret it.

In the UK, the risk profile of non-compliance looks set to grow. The reputational damage from upheld complaints is often considered the ASA's most persuasive enforcement tool; however, it also has other powers, ranging from requesting that search engines remove paid-search ads to referring advertisers to Trading Standards, which can instigate criminal prosecutions. These powers will expand if and when the Digital Markets, Competition and Consumers Bill, first introduced in April 2023 and currently being considered by Parliament, becomes law. In particular, the Bill proposes to give the UK's Competition and Markets Authority (CMA) powers to impose fines and other measures on market actors directly, without the need to go through the courts.

If you have further questions on this topic, please contact the authors or your normal DLA Piper contact.
