A groundbreaking study released in Science indicates that artificial intelligence may wield considerably more influence over political beliefs than experts previously recognized. The research demonstrates that AI chatbots successfully altered the political perspectives of thousands of adults, and that the most convincing conversations often contained factually questionable claims.
The investigation engaged nearly 77,000 individuals to interact with various AI systems on political subjects, including taxation and immigration. Each participant shared their viewpoints, after which the chatbot worked to convince them to embrace the opposing position. The experiment incorporated multiple models from technology companies, including OpenAI, Meta, and xAI.
Researchers documented that numerous persuasion efforts succeeded. The most effective approach involved delivering substantial volumes of detailed information. Alternative methods, such as moral appeals or personalized arguments, proved less successful.
“Our results demonstrate the remarkable persuasive power of conversational AI systems on political issues,” lead author Kobi Hackenburg, a doctoral student at the University of Oxford, stated in an announcement about the study.
Key research findings

The greatest impact originated from chatbots that provided lengthy and intricate explanations. The study determined that chatbots could “exceed the persuasiveness of even elite human persuaders,” attributed to their capacity to transmit massive amounts of information almost instantaneously during exchanges.
The researchers avoided direct comparisons between humans and chatbots in debate scenarios, but they determined that information volume represented a critical factor in persuasion effectiveness.
The authors also identified a significant problem: many of the most persuasive messages lacked accuracy. Approximately 19% of all chatbot claims were rated as “predominantly inaccurate,” and the paper noted that “the most persuasive models and prompting strategies tended to produce the least accurate information.”
The study observed that claims generated by GPT-4.5 demonstrated significantly reduced accuracy compared with claims from smaller, earlier models from OpenAI. The authors reported they witnessed “a concerning decline in the accuracy of persuasive claims generated by the most recent and largest frontier models.”
The outcome reveals a potential trade-off: optimizing chatbots for persuasive capability may undermine truthfulness. This development, the researchers suggest, could compromise the quality of public discourse and degrade the information environment surrounding political topics.
Political dangers and worldwide attention

The study arrives during a period when artificial intelligence is becoming increasingly integrated into political communication. Campaign operations have distributed AI-generated fundraising emails, deepfake videos, and automated social media content.
Foreign governments have deployed synthetic online posts in propaganda campaigns. President Donald Trump has shared AI-created images on social platforms, and state actors in China and Russia have employed comparable tools to manipulate online narratives.
The paper cautioned that sophisticated chatbots could serve “unscrupulous actors wishing, for example, to promote radical political or religious ideologies or foment political unrest among geopolitical adversaries.” The authors indicated that powerful models could be weaponized to trigger societal division if deployed without proper oversight.
Helen Margetts, a professor at Oxford and a co-author of the study, explained in a statement that the research formed part of a comprehensive initiative “to understand the real-world effects of LLMs on democratic processes.” The study received support from Britain’s Department for Science, Innovation and Technology.
The research team comprised members from the UK-based AI Security Institute, the University of Oxford, the London School of Economics, Stanford University, and the Massachusetts Institute of Technology.
Rapid AI adoption patterns
The proliferation of AI systems continues to accelerate. An NBC News Decision Desk Poll found that approximately 44% of U.S. adults use AI tools such as ChatGPT, Google Gemini, or Microsoft Copilot “sometimes” or “very often.” This makes chatbot exposure widespread, extending well beyond politics or election periods.
The study revealed that chatbots were far more persuasive in conversation than through static written messages. When participants merely read a 200-word argument composed by AI, they were far less likely to change their positions; in direct dialogue, persuasion increased by 41% to 52% depending on the model employed.
The impact also persisted over time. Between 36% and 42% of the attitude shift remained one month following interaction. This durability creates concerns for elections and public messaging campaigns.
Democratic implications
Experts say the study matters because it provides evidence where previously there was only speculation.
“Now we have evidence showing that as models get better, they are becoming more persuasive,” explained Shelby Grossman, a professor at Arizona State University, who was not involved in the research.
She suggested it was plausible to anticipate foreign governments could attempt to deploy persuasive AI content to amplify division online, although she acknowledged potential beneficial applications when actors maintain transparency.
David Broockman, a political scientist at the University of California, Berkeley, expressed relief that the effect was not more substantial.
“There are these doomsday scenarios in the world that say AI is going to hypnotize or brainwash us because it’s so much more persuasive than a human,” he commented.
He contended that the study demonstrates that humans generally respond to comprehensive information rather than to manipulation alone.
Understanding the broader context

The research authors acknowledged limitations. In practical settings, individuals may not desire extended political conversations with a chatbot. Nevertheless, the study offers an initial perspective on how AI systems could shape political discussions, campaign messaging, and even social stability.
The scale of potential influence extends beyond individual conversations. As artificial intelligence systems become more sophisticated and accessible, their cumulative impact on public opinion formation could prove substantial. Political strategists, advocacy groups, and malicious actors all gain access to the same persuasive technologies.
The accuracy problem compounds the challenge. If the most persuasive AI systems also generate the least reliable information, democratic discourse faces a fundamental threat: voters acting on convincing but inaccurate chatbot interactions would erode informed decision-making.
Platform companies face difficult choices. Restricting chatbot capabilities might limit innovation and legitimate uses. Allowing unrestricted deployment risks enabling manipulation at unprecedented scale. Finding the appropriate balance requires ongoing research and policy development.
The timing carries special significance. With major elections approaching in numerous democracies, understanding the persuasive capabilities of AI systems becomes urgent. Campaign regulations, platform policies, and voter education programs may all require updates to address this emerging challenge.
Looking forward
The findings establish a transformed landscape for political communication. The pressing question now involves how governments, researchers, and platforms will regulate persuasive AI systems before higher-stakes elections materialize.
International cooperation may prove essential. Persuasive artificial intelligence tools transcend borders, requiring coordinated approaches to governance and standards. Individual nations acting alone may struggle to address a fundamentally global technology.

