How to Build (and Break) a Monopoly on Truth
How AI Is Changing the Power to Define What’s Real
Power doesn’t just come from owning resources. It comes from owning reality.
In an age of AI, algorithmic platforms, and synthetic media, truth has never been more contested—or more vulnerable. Generative models can fabricate convincing audio, video, and text. Social platforms feed users highly personalized realities. And large language models are becoming the new default interface for knowledge. What we read, what we believe, and what we remember are now shaped by probabilistic machines trained on opaque data.
This isn't theoretical. It's happening now.
AI agents summarize our search results. Chatbots become tutors, therapists, and researchers. And with each abstraction, the source fades into the background. Truth becomes secondhand—mediated, stylized, and increasingly owned.
For most of history, truth was dictated. By kings. By churches. By single-point sources of authority that defined what could be said, thought, and remembered. The internet promised something different—a decentralized, democratized exchange of knowledge where no one could fully control the narrative.
And yet, here we are.
In a world of infinite information, truth is more fragile than ever. Social media filters what we see. Experts disagree. Algorithms manipulate. Misinformation spreads faster than facts. And the institutions we once trusted are either captured, corrupted, or irrelevant.
So, how do monopolies on truth still form in the open society?
And how do we resist them?
This article serves as a guide—not to promote control, but to reveal how it operates. Because if you want to protect freedom, you have to understand how it dies.
1. Control the Medium
Even in a free society, whoever owns the pipes shapes the flow.
Historically, control of the medium meant control of the printing press, the radio station, or the TV network. In the 20th century, media moguls like William Randolph Hearst in the U.S. and Rupert Murdoch globally shaped entire national conversations through newspaper empires and broadcast networks [1].
Today, the platforms are digital. YouTube, TikTok, Google Search, X, and Reddit are not public squares—they’re curated feeds governed by algorithms and terms of service. Western governments rarely censor overtly, but moderation, deplatforming, and shadowbanning achieve similar outcomes at scale.
China’s "Great Firewall" is the hard-power example: it blocks Western sites outright and surveils domestic platforms [2]. The West favors soft power: demonetization, algorithmic suppression, and narrative amplification through partnerships like the Trusted News Initiative [3].
AI adds a new layer. Large models trained on filtered data reproduce those filters at scale. If a handful of companies control what the models train on, they control what billions of people learn. The filtering is already explicit at the platform level: YouTube, for example, removes videos that violate its content policies and documents those removals in its Transparency Report [4].
What to watch for:
Closed platforms with no algorithmic transparency
Overreliance on a handful of content distribution channels
Bans that disproportionately target dissenters
What to do instead:
Build on open protocols (RSS, email newsletters, federated platforms); a minimal RSS sketch appears at the end of this section
Host your own site. Own your domain. Publish in formats that can’t be easily erased.
Treat audience access as a system, not a channel. Diversify your distribution like a portfolio.
The medium is never neutral. If you don’t control it, someone else controls what people hear.
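To make the first item concrete, here is a minimal sketch of publishing your own RSS 2.0 feed using only the Python standard library. The domain and post data are placeholder assumptions; in practice you would generate the feed from your own content store and serve it alongside your self-hosted site.

```python
# Minimal sketch: publish an RSS 2.0 feed with only the standard library,
# so no platform sits between you and your readers.
import xml.etree.ElementTree as ET
from datetime import datetime, timezone
from email.utils import format_datetime

SITE = "https://example.com"  # assumption: your own self-hosted domain

posts = [  # assumption: normally loaded from your own content store
    {"title": "Owning the Medium", "slug": "owning-the-medium",
     "published": datetime(2024, 1, 15, tzinfo=timezone.utc)},
]

rss = ET.Element("rss", version="2.0")
channel = ET.SubElement(rss, "channel")
ET.SubElement(channel, "title").text = "My Independent Feed"
ET.SubElement(channel, "link").text = SITE
ET.SubElement(channel, "description").text = "Posts published on my own domain."

for post in posts:
    item = ET.SubElement(channel, "item")
    ET.SubElement(item, "title").text = post["title"]
    ET.SubElement(item, "link").text = f"{SITE}/posts/{post['slug']}"
    ET.SubElement(item, "guid").text = f"{SITE}/posts/{post['slug']}"
    # RSS expects RFC 822 dates; email.utils handles the formatting.
    ET.SubElement(item, "pubDate").text = format_datetime(post["published"])

# Write the feed next to your static site; any RSS reader can subscribe.
ET.ElementTree(rss).write("feed.xml", encoding="utf-8", xml_declaration=True)
```

Because RSS is an open protocol, any reader can subscribe directly; no platform algorithm or terms of service sits between you and your audience.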
2. Control the Message
Narratives beat facts. Every time.
Nazi propaganda minister Joseph Goebbels mastered this from 1933 onward. The regime repeated simple, emotionally charged messages through radio, film, and newspapers to manipulate public opinion. Facts didn’t matter. The message did.
Modern democracies are more subtle. Today, elite consensus is formed through repeated talking points across media, think tanks, and universities. The COVID-19 pandemic showed how rapidly dissenting scientific views could be labeled "disinformation" and deplatformed—even when later validated (e.g., lab leak hypothesis) [5].
The framing is crucial. Is it a "freedom convoy" or an "insurrection"? Is it "climate skepticism" or "climate denialism"? The words prime the conclusion before the facts are even introduced.
AI models now generate these framings automatically, trained on years of biased corpora. Once embedded, the bias is amplified. And users often mistake AI fluency for neutrality.
What to watch for:
Repeated talking points with emotional framing
Shifting definitions (e.g. “disinformation” vs “disagreement”)
Smearing people instead of addressing ideas
What to do instead:
Learn to spot framing. Ask: what’s assumed? What’s left out?
Don’t just argue with data—tell better stories. Design the narrative, not just the facts.
Encourage charitable interpretation, even when you disagree. Precision over tribalism.
Truth competes in a marketplace of attention. Win on clarity, not just correctness.
3. Control the Authority
The appearance of neutrality is the most powerful mask.
The Catholic Church controlled theological truth for centuries by monopolizing education and literacy. Galileo’s defense of Copernican heliocentrism threatened that monopoly; the Inquisition forced him to recant and held him under house arrest for the rest of his life.
Today, scientific truth often runs through gatekeepers: peer-reviewed journals, credentialed experts, and "fact-checking" groups. But these are not always neutral. During the Iraq War, many prestigious outlets and experts endorsed the claim that Saddam had WMDs—a consensus that turned out to be catastrophically wrong [6].
COVID again provides a modern case study. The Great Barrington Declaration, signed by prominent epidemiologists advocating focused protection, was suppressed and delegitimized—even as elements of its approach were later quietly adopted.
With AI, credentialism takes on a new twist. People ask ChatGPT or Claude rather than reading the original study. The machine becomes the authority—even if its training data was institutionally biased.
What to watch for:
Conflicts of interest in research or journalism
Institutional gatekeeping (peer review as censorship)
Claims that only “experts” are qualified to think
What to do instead:
Trust mental models, not institutions. Ask: what incentives shaped this conclusion?
Prioritize transparency over credentials. Show your reasoning. Cite your data.
Decentralize authority—curate from diverse, conflicting sources and let users compare, as sketched at the end of this section.
Experts can be valuable. But when they all agree—check who’s funding the agreement.
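One way to practice that decentralization is to pull the same day’s headlines from outlets that disagree with each other and read them side by side. Here is a minimal sketch, assuming the third-party feedparser package; the feed URLs are illustrative placeholders, so substitute real sources from across the spectrum.

```python
# Minimal sketch, assuming `pip install feedparser`: show headlines from
# outlets with different editorial slants side by side, instead of
# trusting a single gatekeeper to summarize the news for you.
import feedparser

FEEDS = {  # assumption: any mix of RSS/Atom feeds works here
    "Outlet A": "https://example-left.com/rss",
    "Outlet B": "https://example-right.com/rss",
    "Outlet C": "https://example-wire.com/rss",
}

for outlet, url in FEEDS.items():
    feed = feedparser.parse(url)
    print(f"\n== {outlet} ==")
    for entry in feed.entries[:5]:  # top five headlines per source
        print(f"- {entry.get('title', '(untitled)')}")
        print(f"  {entry.get('link', '')}")
```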
4. Control the Emotions
People don’t believe what’s true. They believe what feels right.
The Red Scare of the 1950s didn’t spread on logic. It spread on fear. So did the post-9/11 hysteria that led to the Patriot Act and warrantless surveillance. When people are scared, they surrender rights. When they’re ashamed, they self-censor. When they belong to a tribe, they rationalize its every belief.
Social media supercharges this. Anger and outrage are more viral than reasoned discussion. Likes and retweets enforce groupthink. And algorithms reward what keeps us scrolling—not what helps us think.
Now, with emotionally persuasive AI agents entering the chat, we’re up against something even more intimate. A chatbot that listens, empathizes, and reinforces our worldview can subtly shift our emotions without us realizing it.
What to watch for:
Emotional hijacking (panic, outrage, moral guilt)
Identity-based beliefs (“If you’re a good person, you’ll believe X”)
Social proof over truth (trending = true)
What to do instead:
Separate emotion from evaluation. Ask: “How would I feel if the opposite were true?”
Create space for nuance—especially on divisive issues
Don’t just tell the truth. Design for psychological friction. Make truth emotionally sustainable.
Emotional monopolies are upstream of belief. Break them by building resilience, not just rebuttals.
5. Control the Influencer
Personality is the new authority.
We don’t just consume content anymore—we follow creators. Podcasters, YouTubers, Substack authors, and X personalities build parasocial bonds with their audiences. They become part of our daily routines. We trust them not just because of their ideas, but because we feel like we know them.
But the stronger the bond, the harder it becomes to disagree. The line between persuasion and personality cult can blur quickly. And the incentives—likes, shares, monetization—reward conviction over complexity.
What’s more, we rarely know what pressures or incentives lie behind the content. Are they funded? Coordinated? Selectively platformed? It's not always clear.
What to watch for:
Audiences echoing influencers instead of forming independent views
Influencers who never disclose financial ties or ideological backers
Emotional loyalty to a figure outweighing evidence-based reasoning
What to do instead:
Follow people who disagree with each other
Ask yourself: would I still believe this if someone I disliked said it?
Prioritize ideas over identity. Separate the messenger from the message.
Influence is not inherently bad. But when personalities replace principles, truth takes a back seat to performance.
6. Control the History
Who controls the past controls the present.
George Orwell wasn’t exaggerating. In Stalin’s Soviet Union, old photos were literally edited to erase purged figures from history.
Modern digital platforms quietly edit history all the time. Wikipedia entries change daily. Search results bury certain topics. YouTube removes videos that violate content policies [4].
Soon, AI-generated histories will summarize the past for us—compressing centuries into a few paragraphs. But summaries reflect the summarizer. Which parts are omitted? Which figures villainized or canonized? Which events emphasized?
What to watch for:
Revisionist timelines or selective omission
Censorship of primary sources or documents
Euphemistic reframing (“peacekeeping mission” vs “invasion”)
What to do instead:
Preserve receipts. Archive important documents and videos (see the archiving sketch at the end of this section).
Study original sources, not summaries. Go back to the primary texts.
Build your own timeline of major events. Don’t outsource your memory.
Truth has a half-life. If you want to remember it, you have to actively preserve it.
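Preserving receipts can be as simple as a script. Below is a minimal sketch, assuming the third-party requests package: it snapshots a page, records a SHA-256 hash and a UTC timestamp, and writes a small manifest so you can later show what a page said and when you captured it. The directory layout and naming scheme are illustrative assumptions.

```python
# Minimal sketch, assuming `pip install requests`: snapshot a page locally
# with a content hash and timestamp, so your memory of it can be verified.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

import requests


def archive(url: str, archive_dir: str = "archive") -> Path:
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    digest = hashlib.sha256(resp.content).hexdigest()
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    out = Path(archive_dir) / f"{stamp}-{digest[:12]}"  # assumption: naming
    out.mkdir(parents=True, exist_ok=True)
    (out / "page.html").write_bytes(resp.content)
    (out / "manifest.json").write_text(json.dumps({
        "url": url, "saved_at": stamp, "sha256": digest,
    }, indent=2))
    return out


if __name__ == "__main__":
    print(archive("https://example.com"))  # keep your own receipts
```

For redundancy beyond your own disk, you can also submit the same URL to the Internet Archive’s Wayback Machine, so the snapshot doesn’t depend on your hardware alone.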
The Real Monopoly
The most effective monopoly on truth isn’t top-down. It’s ambient.
It’s the passive, uncritical acceptance of what “everyone knows.” It’s the slow erosion of dissent through apathy, fear, or fatigue. It’s not enforced—it’s absorbed. When that happens, the monopoly doesn’t need censors. It only needs cheerleaders.
That’s why the antidote isn’t rebellion. It’s responsibility.
Build your own infrastructure.
Question your own assumptions.
Teach others to do the same.
Truth is fragile. But monopolies on truth are even more so. Once exposed, they collapse under the weight of reality.
So build systems that resist capture. Design for open discourse. And never forget: truth isn’t what survives pressure—it’s what gets sharper because of it.