AI in Advertising: A Regulatory Lookahead for 2026
The use of artificial intelligence in advertising has accelerated rapidly, with AI-generated content now featuring prominently in campaigns across every sector. As explored in my earlier article, AI in Advertising: Balancing Innovation and Integrity, this presents both significant opportunities and complex challenges. 2026 marks a pivotal moment in the regulatory landscape, as legislators and regulators in the UK and EU respond to the rapid adoption of AI with new transparency obligations, disclosure requirements, and enforcement priorities. For advertisers and brands, understanding these developments is essential to navigating the year ahead.
The EU AI Act: Implications for UK Advertisers
The EU AI Act remains highly relevant for UK advertisers targeting European consumers. The most significant EU regulatory development is scheduled to arrive on 2 August 2026, when the bulk of the EU AI Act is due to come into force (although there are proposals that some provisions should now be delayed further). Article 50 is particularly relevant for advertisers, as its transparency obligations will fundamentally change how AI-generated advertising content must be identified and disclosed for campaigns reaching EU audiences. For advertisers leveraging AI to create campaign materials, this means embedding technical markers such as watermarks or metadata that identify content as AI-generated.
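By way of illustration only, the marking obligation can be made concrete with a minimal sketch. The function name, the JSON-sidecar approach, and the tool name are hypothetical; real workflows would typically embed provenance directly as XMP/IPTC metadata or C2PA Content Credentials rather than a sidecar file. The value "trainedAlgorithmicMedia" is drawn from IPTC's Digital Source Type vocabulary, which is commonly used to mark generative-AI content.

```python
import json
from pathlib import Path

# IPTC's Digital Source Type vocabulary includes "trainedAlgorithmicMedia"
# for content produced by generative AI. Recording it in a JSON sidecar is
# one simple, auditable way to mark an asset as AI-generated; production
# systems would more likely embed XMP/IPTC or C2PA Content Credentials.
AI_GENERATED = "trainedAlgorithmicMedia"

def write_provenance_sidecar(asset_path: str, tool_name: str) -> Path:
    """Record that an asset was AI-generated in a .provenance.json sidecar."""
    record = {
        "asset": asset_path,
        "digitalSourceType": AI_GENERATED,
        "generator": tool_name,
        "disclosure": "This content was generated by artificial intelligence.",
    }
    sidecar = Path(asset_path).with_suffix(".provenance.json")
    sidecar.write_text(json.dumps(record, indent=2))
    return sidecar

# Hypothetical asset and tool names, for illustration only.
sidecar = write_provenance_sidecar("campaign_hero.png", "ExampleImageModel v2")
print(sidecar.read_text())
```

Whatever the technical mechanism, the point is that the marking must be machine-readable and persist with the asset so that downstream platforms and regulators can detect it.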
Any content constituting a "deepfake" (that is, image, audio, or video content that appears to depict real persons doing or saying things they did not actually do or say) must be disclosed as having been artificially generated or manipulated. Similarly, AI-generated text published to inform the public on matters of public interest must be labelled as such. Those using emotion recognition or biometric categorisation systems in advertising contexts must inform individuals exposed to such systems of their operation.
The European Commission has been developing a voluntary Code of Practice on the marking and labelling of AI-generated content, with the first draft published in December 2025. The final version is expected by June 2026, providing practical guidance on watermarking standards, metadata requirements, and user-facing disclosure mechanisms. UK advertisers with cross-border campaigns would be well advised to monitor these developments closely and consider adopting compliant practices as a matter of best practice, even for purely domestic campaigns.
UK Advertising Standards Authority: Proactive Enforcement and AI Scrutiny
Whilst the UK Advertising Codes do not yet contain AI-specific rules, the Committee of Advertising Practice (CAP) has made clear that existing rules apply regardless of how advertising content is generated. The fundamental principles remain: advertisements must not mislead by inaccuracy, ambiguity, exaggeration, omission, or otherwise. AI-generated content that misrepresents products or creates impressions that consumers would not form if they knew the content was AI-generated could fall foul of these rules.
A significant shift in the ASA's approach deserves particular attention. The ASA's Active Ad Monitoring System (AAMS), an AI-powered tool for scanning online advertising, is expected to review 40 million advertisements in 2026 and has already shown success in certain sectors, as my colleague Evie O'Connor has explored. This represents a marked transition from what has historically been a reactive, complaint-based enforcement model to proactive intervention.
The ASA's Chief Executive has signalled that advertisements both for and using AI technology are on the regulator's radar, making it clear that advertisers using AI will be held responsible for the content that AI produces. Further guidance from CAP on the use of AI in advertising, or enforcement action concerning advertisements that mislead consumers about AI involvement, can reasonably be expected during 2026.
For advertisers using AI to create content, CAP advises asking whether the audience would be misled if the use of AI is not disclosed and, where there is a risk of misleading consumers, whether disclosure would clarify or contradict the advertisement's overall message. Disclosure cannot remedy fundamentally deceptive messaging, and particular caution is warranted around the use of deepfakes and other AI technologies that could potentially mislead viewers.
Platform Disclosure Policies
Social media platforms have established their own AI disclosure requirements that operate alongside regulatory obligations. Meta requires labelling of digitally generated or altered photorealistic video or realistic-sounding audio, with advertisements using Meta's generative AI features automatically labelled with "AI info." TikTok requires disclosure for realistic AI-generated images, video, and audio, and participates in the Coalition for Content Provenance and Authenticity (C2PA) to detect and label AI content through Content Credentials. YouTube mandates disclosure for content that is meaningfully altered or synthetically generated and appears realistic.
These platform policies represent an important layer of compliance for advertisers operating across social channels. Non-compliance may result in content removal, account restrictions, or reduced distribution: consequences that can prove commercially significant regardless of any formal regulatory action. Advertisers utilising these channels should monitor platform policy updates and ensure ongoing compliance.
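For teams running campaigns across several platforms, the policies above can be tracked in a simple internal pre-flight check. This is a deliberately simplified sketch: the rule summaries paraphrase the policies described above, and the field names (`ai_generated`, `media_type`, `realistic`) are assumptions for illustration, not platform API fields.

```python
# Simplified paraphrases of the platform policies discussed above:
# Meta: realistic digitally generated/altered video or audio; TikTok:
# realistic AI-generated images, video, or audio; YouTube: realistic
# synthetic or meaningfully altered content of any media type.
REALISTIC_AI_MEDIA = {"image", "video", "audio"}

PLATFORM_DISCLOSURE_RULES = {
    "meta": lambda ad: ad["ai_generated"] and ad["realistic"]
        and ad["media_type"] in {"video", "audio"},
    "tiktok": lambda ad: ad["ai_generated"] and ad["realistic"]
        and ad["media_type"] in REALISTIC_AI_MEDIA,
    "youtube": lambda ad: ad["ai_generated"] and ad["realistic"],
}

def platforms_requiring_disclosure(ad: dict) -> list[str]:
    """Return the platforms whose (simplified) policies require an AI label."""
    return [name for name, rule in PLATFORM_DISCLOSURE_RULES.items() if rule(ad)]

ad = {"ai_generated": True, "media_type": "image", "realistic": True}
print(platforms_requiring_disclosure(ad))  # a realistic AI image triggers TikTok and YouTube rules here
```

A check of this kind is no substitute for reading the policies themselves, which change frequently, but it forces disclosure decisions to be made explicitly before launch rather than discovered after takedown.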
Industry Self-Regulation
The advertising industry is not waiting for regulators to act. The Interactive Advertising Bureau (IAB) has released its first AI transparency and disclosure framework, providing unified guidance for responsible disclosure of AI usage by advertisers, agencies, and platforms. The ICC Advertising and Marketing Communications Code, updated in 2024, now explicitly addresses AI, establishing that responsibility for marketing communications extends to those developing algorithms and AI for marketing purposes, and that rules apply regardless of whether content is created by humans or AI. For UK advertisers, the CAP and BCAP Codes remain the primary self-regulatory framework, and it is reasonable to expect that AI-specific guidance will be incorporated into these Codes in due course.
These self-regulatory initiatives offer useful frameworks for compliance but should be viewed as a floor rather than a ceiling. Advertisers would be wise to adopt practices that exceed minimum requirements, given the pace of regulatory development and the reputational risks associated with consumer backlash against undisclosed AI content.
Looking Forward
The integration of AI into advertising campaigns offers unparalleled creative and commercial possibilities. Yet it also presents a host of legal and ethical challenges that must be carefully managed. The regulatory landscape in 2026 will demand that advertisers establish robust processes for identifying AI-generated content, implementing marking requirements, making appropriate disclosures, and conducting due diligence on AI providers and training data.
The advertising world should work to establish responsible AI practices that respect both consumer rights and the rule of law. For campaigns reaching EU consumers, penalties for non-compliance under the EU AI Act can reach up to €15 million or 3% of global annual turnover, whichever is higher. In the UK, whilst the ASA cannot impose fines directly, its enforcement mechanisms (including referral to Trading Standards, denial of advertising space, and reputational damage through published rulings) remain powerful incentives for compliance.
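The "whichever is higher" formulation means the exposure scales with the business: the €15 million figure acts as a floor until 3% of worldwide annual turnover exceeds it. A one-line sketch makes the cap concrete (actual fines are set case by case and may be far lower; this computes only the statutory maximum cited above):

```python
def max_transparency_penalty_eur(global_annual_turnover_eur: float) -> float:
    """Statutory cap on EU AI Act transparency penalties: EUR 15m or 3% of
    worldwide annual turnover, whichever is higher (per the figures cited
    above). Actual fines are discretionary and may be far lower."""
    return max(15_000_000.0, 0.03 * global_annual_turnover_eur)

# EUR 200m turnover: 3% is EUR 6m, so the EUR 15m floor applies.
print(max_transparency_penalty_eur(200_000_000))    # 15000000.0
# EUR 2bn turnover: 3% (EUR 60m) exceeds the floor.
print(max_transparency_penalty_eur(2_000_000_000))  # 60000000.0
```

In other words, for any group with worldwide turnover above €500 million, the percentage-based cap is the binding one.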
Only by taking a proactive approach to compliance can we ensure that AI serves to enhance, rather than undermine, the art of advertising. The brands that navigate this transition successfully will be those that embrace transparency not as a burden but as a positive foundation for consumer trust.