Artificial intelligence has spread into nearly every sector, and politics is no exception. From data analysis to personalized campaigning, AI's presence is rapidly growing in the political arena. Its ability to generate realistic images, videos, and voices swiftly is reshaping how campaigns communicate with voters. However, this technological advancement also brings a new set of challenges, especially in maintaining authenticity and transparency in political discourse.
"We need action now to protect local elections. The New York City Campaign Finance Board should put in place rules that require all campaigns to clearly label image, video and audio content generated using AI tools." — CityandStateNY.com, Opinion: New York City must lead boldly on AI
AI's potential in political campaigns is vast. It can help campaigns understand voter preferences, target messages more effectively, and even draft responses to policy questions. But this power comes with potential pitfalls. The ability to create deepfakes or manipulate media content raises concerns about misinformation and the erosion of trust in political communication, at a time when that trust is already at a low point following the rampant misinformation of the 2016 presidential election. While some states, including California, Michigan, Minnesota, Texas and Washington, have enacted laws requiring disclosure for AI-generated media in political ads, the question of how to regulate this technology effectively remains open. Companies like Meta and Google have also moved to add transparency around AI content in the political ad space. There is a fine line between using AI as a tool for efficiency and allowing it to become a means of deception.
The debate extends to whether political candidates should rely solely on their authentic voice to engage with the electorate. On one hand, AI tools can provide valuable assistance, especially for those facing language barriers. This week, Susan Zhuang, a New York City council member-elect, cited her use of AI to aid her communication, noting that English is not her first language. On the other hand, over-reliance on AI might create a disconnect between candidates and their constituents, diluting the personal touch and authenticity expected in political communication. This tension invites a discussion of the ethical implications of AI usage in politics and the responsibility of candidates to maintain transparency.
As AI becomes more ubiquitous in political campaigns, it's essential to consider regulations that safeguard against misuse and ensure transparency. Should political campaigns be mandated to label AI-generated content, thereby maintaining a clear distinction between human and machine communication? Implementing such rules could be challenging, but it might be necessary to preserve the integrity of political discourse and prevent the spread of misinformation. How exactly would that implementation work? What should be labeled, and what doesn't need to be? An AI-generated voice used to make an opponent appear to say something they never said would be an obvious target for labeling. But what about website content, campaign brochures, or press releases? As we navigate this new frontier, it's crucial to find a balance that leverages AI's benefits while mitigating its risks. These are the questions we asked in this episode of Nuance.
Should political campaigns be required to label multimedia content generated by AI as such?
Nuance with Mike Scala and Jay Carter is a weekly video podcast that engages its audience through examination of current events from the unique perspectives of its hosts and guests.
→ SUBSCRIBE TO Nuance with Mike Scala and Jay Carter ←
The podcast is available on Spotify and all major podcast outlets.