Fake Biden robocall ‘tip of the iceberg’ for AI election misinformation

A digitally altered message created to sound like President Biden urging New Hampshire residents not to vote in Tuesday’s primary has added fuel to calls for regulation of artificial intelligence (AI) as the 2024 campaign heats up.

Experts warn of AI election misinformation

A robocall manipulated to sound like President Biden has raised concerns about the use of artificial intelligence (AI) in elections. The call, which urged New Hampshire residents not to vote in the primary, is the latest example of how AI can create false information and impersonate candidates. Experts warn that as AI technology becomes more advanced and accessible, the potential for AI election misinformation to confuse and deceive voters increases.

Professor Kathleen Carley of Carnegie Mellon University said the robocall is just the beginning of what could be done to suppress votes and attack election workers using AI. Samir Jain, vice president of policy at the Center for Democracy and Technology (CDT), highlighted the risks of deceptive AI election content, including the spread of false information about voting.

Concerns over targeted misinformation

Beyond manipulating a candidate's likeness, advances in AI also raise concerns about the targeted dissemination of misinformation. Carley explained that AI models can generate content tailored to the preferences of specific audiences, much as AI can produce songs in the style of a particular genre or artist. This means misleading stories can be told in different ways to different groups of people, compounding the potential for divisive AI-generated election content.

Robert Weissman, president of the nonprofit watchdog group Public Citizen, emphasized the urgency of addressing the spread of deepfakes, saying political operatives will continue to exploit AI technology absent stricter regulations. Public Citizen has petitioned the Federal Election Commission (FEC) to amend its rules on fraudulent misrepresentation to cover AI in campaigns, but progress has been slow.

Calls for regulation and action

Various entities, including lawmakers and private companies, are calling for regulations and actions to combat AI election misinformation. Public Citizen has urged the FEC to issue clearer rules on deceptive AI content, while Sen. Amy Klobuchar (D-Minn.) has introduced a bipartisan bill that would prohibit the distribution of materially deceptive AI-generated content in political ads.

Meanwhile, states such as California, Michigan, and Washington have enacted or proposed laws to regulate deepfakes in elections, and major social media and AI companies have implemented policies to address the issue. However, experts stress that policies matter only if they are enforced effectively, and that comprehensive, timely action is still needed to mitigate the risks posed by AI election misinformation.