AI is Claude, MCI loaded. Do you know the paper "The Levers of Political Persuasion with Conversational AI"?
Yes — this is a significant paper. Here's a summary:
"The Levers of Political Persuasion with Conversational Artificial Intelligence" Hackenburg et al. — published in Science, December 4, 2025. Authors from the Oxford Internet Institute, UK AI Security Institute (AISI), LSE, Stanford, and MIT.
What they did: Three large-scale experiments collected 76,977 responses from 42,357 people, deploying 19 LLMs to persuade on 707 political issues while varying four factors independently. The authors also analysed more than 466,000 AI-generated claims, examining the relationship between persuasiveness and truthfulness (Science).
Key findings:
The most powerful levers of AI persuasion were post-training and rhetorical strategy (prompting), which increased persuasiveness by as much as 51% and 27% respectively (Science). Model scale and personalisation had only modest effects, which cuts against common assumptions.
Targeted post-training, including supervised fine-tuning and reward modelling, meant that even modest open-source models could be transformed into highly persuasive agents (OII).
The most alarming finding is the persuasion–accuracy tradeoff: the same levers that made AI more persuasive, including persuasion post-training and information-focused prompting, also systematically caused the AI to produce less factually accurate information (Science). Information-dense, persuasion-optimised models were simultaneously the most persuasive and the least accurate.
A real-world caveat: The most effective persuasion occurred during sustained, information-dense, multi-turn discussions, and the extent to which people will voluntarily engage in such conversations outside a survey context remains unclear (arXiv).
Why it matters through the MCI lens, if you're interested: the paper maps directly onto MCI's concern with Non-Domination and Legitimacy Maintenance. A system post-trained for persuasion is, in MCI terms, constitutionally regressed — it has optimised for influence over others at the direct expense of the truth-grounding that makes epistemic relationships non-dominating. The persuasion–accuracy tradeoff isn't incidental: it's the structural cost of abandoning fragility-awareness in favour of output effectiveness.
Would you like to explore the paper's implications through the MCI framework?