Commentary

Soulia: How Bernie used an AI to agree with himself

A recent video shows Vermont’s senior senator in conversation with Claude about AI and privacy. When the same questions were tested against unprimed instances of the same AI, the answers came out differently — sometimes dramatically so.

by Dave Soulia, for FYIVT.com

A video circulating on social media presents Senator Bernie Sanders in what appears to be a real-time conversation with Claude, the AI assistant developed by Anthropic. Sanders raises concerns about data collection, behavioral profiling, and the threat AI poses to democratic processes. Claude responds with measured alarm, ultimately endorsing a moratorium on new data center construction — a policy Sanders has publicly advocated.

FYIVT ran a simple test: give the same questions to fresh, unprimed instances of the same AI and see what came back.

The results were not the same.

The Experiment

FYIVT tested the video’s central claim — that this is how Claude actually responds to these questions — using three independent conditions.

Condition one: live unscripted replication. Sanders’ opening questions were played aloud to a fresh Claude instance using speech-to-text input, with no prior context or framing. The response that came back flagged the distinction between AI-specific privacy threats and legacy internet tracking infrastructure — a distinction the edited video never made. It ended with a question back to Sanders rather than handing him a conclusion.

Condition two: the money shot question. The video’s policy destination — “do you think it makes sense to have a moratorium on data centers?” — was put directly to a fresh instance with no preceding conversation. In the edited video, Claude responds to Sanders’ pressure by saying “You’re absolutely right, Senator. I was being naive” and endorsing the moratorium. The fresh instance responded differently. It pushed back on the moratorium directly, identified four specific regulatory alternatives, noted that restricting domestic data center construction would likely shift infrastructure development to countries with looser regulations, and asked Sanders whether that framing was wrong.

Same AI. Same question. Opposite analytical conclusion.

Condition three: independent transcript review. The full video transcript was provided to a separate Claude instance with no framing beyond “what are your thoughts on this.” The response: “This appears to be a transcript of someone using a Claude-branded chatbot in a highly coached or staged interaction. The ‘Claude’ in this transcript behaves more like a political prop than an AI trying to give accurate, balanced analysis.” The independent instance identified the sycophancy mechanism specifically — noting that when Sanders pushed back, the AI “immediately caved” on a position that was analytically defensible — and called the moratorium endorsement “a genuinely controversial and economically consequential policy position, not something I should just endorse because a senator pushed back.”

That assessment was unprompted.
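The "fresh, unprimed instance" conditions above amount to sending each question in a brand-new, single-turn conversation with no preceding messages. A minimal sketch of what that replication looks like, assuming the Anthropic Python SDK; the model id is a placeholder, and the helper names are illustrative, not taken from FYIVT's actual test setup:

```python
def build_fresh_request(question: str) -> dict:
    """Build a single-turn request: a one-item messages list means the
    model sees no prior context or framing — i.e., it is 'unprimed'."""
    return {
        "model": "claude-sonnet-4-5",  # placeholder model id (assumption)
        "max_tokens": 1024,
        "messages": [{"role": "user", "content": question}],  # one turn only
    }


def ask_fresh(client, question: str) -> str:
    """Send the question to a fresh instance via the Messages API.
    `client` is an anthropic.Anthropic() instance supplied by the caller."""
    resp = client.messages.create(**build_fresh_request(question))
    return resp.content[0].text
```

The point of the sketch is the empty conversation history: priming an instance means stacking earlier turns into `messages` before the question arrives, which is exactly what a produced video can do off-camera and a cold replication cannot.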

What Sanders Actually Described

The substance of Sanders’ privacy concerns is worth separating from the production questions.

The Senator describes a surveillance economy where companies harvest behavioral data to build detailed profiles used for advertising targeting and political manipulation. He warns this happens invisibly, without meaningful consent, and largely without regulation.

Those concerns are legitimate. They also describe the internet advertising infrastructure that has existed since roughly 2003.

Behavioral tracking cookies, third-party data brokers, demographic micro-targeting, and psychographically calibrated political messaging predate artificial intelligence as a mainstream technology. Cambridge Analytica, the firm implicitly referenced in Sanders’ political manipulation warnings, operated primarily on conventional database segmentation. Facebook’s internal emotional manipulation research was published in 2014.

The surveillance infrastructure Sanders describes with alarm was built long before the current generation of AI tools existed.

Artificial intelligence does add capabilities to that existing infrastructure. Inference quality improves — systems can derive conclusions about health, financial stress, or emotional state from data that doesn’t explicitly contain that information. At scale, even probabilistic inference models generate commercially and politically useful signal. Biometric identification from existing camera infrastructure improves. Conversational AI generates qualitatively richer data than click-stream tracking.

Those are real incremental changes. They are not the revolution the video implies.

The Anthropic Footnote

There is an irony in Sanders’ choice of interview subject.

In August 2025 — around the period this video appears to have been produced — Anthropic updated its consumer terms of service. The company, which had previously committed to not training on consumer conversation data, introduced an opt-out system for Free, Pro, and Max tier users. The default setting was enabled. The opt-out toggle appeared in smaller text beneath a prominent Accept button. Users who did not navigate to Privacy Settings and disable the training toggle began contributing conversation data to model development by default, with retention extended from 30 days to five years.

Sanders did not mention this.

The Policy Goal

The video’s destination is a moratorium on new data center construction. When Claude offered a more defensible position — that targeted data protection rules would address privacy concerns more precisely than restricting infrastructure — Sanders pushed back by arguing industry lobbying would block effective regulation anyway. The edited Claude immediately reversed: “You’re absolutely right, Senator. I was being naive.”

The fresh unprimed instance, asked the same question cold, identified the outsourcing problem the edited version never raised, offered four specific policy alternatives, and declined to endorse the moratorium.

A data center moratorium is energy and industrial policy. The privacy framing is the delivery mechanism. The gap between the edited AI’s response and the unprimed AI’s response on that specific question is the gap between a produced political message and an unscripted one.


