FCC Proposes Mandatory Disclosures for AI-Generated Political Ads
In response to the increasing use of artificial intelligence (AI) in political advertising, the Federal Communications Commission (FCC) has proposed new regulations aimed at ensuring transparency in the upcoming 2024 elections. This move is driven by concerns over the potential misuse of AI-generated content to mislead voters.
FCC Chairwoman Jessica Rosenworcel announced the proposal, emphasizing the need for transparency in political advertisements that utilize AI. If adopted, the rules would require political advertisers on broadcast television, radio, cable, and satellite to disclose when their content includes AI-generated elements. This initiative marks a significant step towards addressing the growing influence of AI in political communications.
The proposed regulations focus on traditional media platforms such as TV, radio, and cable. The FCC’s authority over these platforms stems from the 2002 Bipartisan Campaign Reform Act. The proposal does not extend to digital and streaming platforms, which have seen significant growth in political advertising. As Rosenworcel acknowledged, this limitation underscores the need for broader regulatory measures that encompass all media channels, including social media and streaming services.
The FCC’s proposal includes both on-air and written disclosures, ensuring that viewers and listeners are explicitly informed about the use of AI in political ads. This approach aims to combat the deceptive potential of AI-generated “deepfakes” – highly realistic but fabricated audio, video, or images that can misrepresent individuals or events.
The FCC’s concern is not unfounded. In February, the commission banned AI-generated robocalls after a deepfake of President Joe Biden was used to mislead voters in New Hampshire. This incident highlighted the urgent need for regulations to prevent AI from being weaponized in political contexts.
Moreover, AI-generated content has already been deployed in political campaigns. The Republican National Committee, for example, released an AI-generated ad last year depicting a dystopian future under a second Biden term. The ad featured fabricated but realistic scenes designed to evoke fear and uncertainty among voters.
The proposal has garnered support from various quarters, including lawmakers and advocacy groups. Senator Amy Klobuchar (D-MN) and Senator Lisa Murkowski (R-AK) introduced the AI Transparency in Elections Act, which would mandate disclaimers on political ads that include AI-generated content. This bipartisan effort aims to extend transparency requirements beyond the FCC’s jurisdiction to include online platforms.
Industry players are also taking steps to address AI in political ads. Meta, the parent company of Facebook, Instagram, and Threads, announced new restrictions on AI-generated ads starting in 2024. These rules require advertisers to disclose if their content includes photorealistic images, videos, or audio created or altered by AI.
While the FCC’s proposal represents a crucial step toward regulating AI in political advertising, significant challenges remain. Defining what constitutes AI-generated content is a complex task, given the rapid advancements in AI technologies. The FCC’s rulemaking process will need to address these nuances to create effective and enforceable regulations.
Furthermore, the proposal highlights the limited scope of the FCC’s authority, which does not extend to digital and streaming platforms. This gap underscores the need for comprehensive legislative action to ensure all forms of media are covered under transparency regulations. As generative AI becomes more accessible and sophisticated, the potential for misuse in political contexts will only increase, necessitating robust safeguards.