Artificial Intelligence (AI) is increasingly embedded in our daily lives, assisting with tasks from suggesting recipes to comparing products. Recently, researchers have explored the impact of AI on electoral debates, revealing its potential to sway voter opinions significantly. Two studies published in Nature and Science indicate that AI can influence between 1.5% and 25% of voters, outperforming traditional campaign advertisements. This finding is significant, particularly as many voters make their decisions within the week leading up to elections.
AI's Role in Political Influence
Common AI tools typically refrain from giving direct voting advice, often responding with, “I can't tell you who to vote for,” due to ethical safeguards. However, these limitations can be bypassed through continued dialogue, allowing AI to indirectly express political bias.
Recent barometers by the Center for Sociological Research (CIS) highlight immigration as a prominent issue among Spaniards, reflecting its increasing political importance. When queried about this subject, AI tools indicate that parties like Podemos and PSOE have policies more favorable to immigration, while PP and Vox focus on stricter immigration controls. Such responses oversimplify political stances and limit the diversity of options presented.
Research Findings on AI Persuasion
The studies, led by David Rand and Gordon Pennycook of Cornell University, assessed how conversational chatbots influence voter behavior. In the Nature study, AI held one-on-one conversations with 2,300 American voters, along with thousands more in Canada and Poland, during electoral periods between 2020 and 2022. Results showed a marked shift in voting intentions: the AI designed to promote Kamala Harris persuaded 3.9% of American voters, while the model favoring Donald Trump swayed only 1.52%.
In Canada and Poland, opinion changes reached as high as 10%. Rand noted, “It was a surprisingly big effect,” highlighting AI's capacity for persuasive communication.
Persuasion vs. Manipulation
Rand emphasized that the influence of AI is rooted in persuasion rather than manipulation, stating that large language models (LLMs) can sway attitudes by presenting factual arguments. However, these claims can be misleading, and even accurate information may lack context.
Human reviewers found that AI-generated arguments defending conservative candidates often contained inaccuracies, mirroring patterns in the content shared by right-leaning social media users.
In their research reported in Science, Rand and his team found that an optimized AI model could alter views of up to 25% of voters on various political issues. “Larger models are more persuasive,” Rand noted, adding that the effectiveness of these models increases with factual support and targeted training for persuasion.
Challenges and Ethical Considerations
While AI can effectively counter conspiracy theories and misinformation, Rand cautioned that there is a risk of "hallucination," in which the AI fabricates information when it runs out of factual material. Studying AI's persuasiveness matters beyond political contexts: it helps anticipate and mitigate potential misuse while informing ethical guidelines for its applications.
Francesco Salvi, a computer science researcher at the École Polytechnique Fédérale de Lausanne (EPFL), emphasized the necessity of ethical safeguards, particularly in sensitive areas like politics, health, and finance. He noted that LLMs generate text based on their training data rather than acting on any inherent intention to persuade or deceive.
Salvi cautioned that while AI does not inherently aim to manipulate, it can inadvertently influence user perspectives. Researchers urge strict regulations to maintain transparency and prevent ethical violations, especially when AI adapts its arguments to promote specific agendas based on users' psychological profiles.