When Algorithms Decide What Opinions You See


You’re not choosing what commentary you read nearly as much as you think you are. The algorithm is choosing for you. Your social media feed, your newsletter recommendations, your “for you” pages, even your search results—they’re all filtered through systems optimizing for engagement, not for giving you a diverse or representative sample of perspectives.

This quiet curation of opinion is reshaping public discourse in ways we barely understand and rarely discuss. We argue about what opinions should or shouldn’t be allowed while ignoring that most opinions never reach most audiences, not because they’re censored, but because the algorithm didn’t show them to anyone.

The Invisible Filter

Here’s how it works: you follow some commentators on Twitter or Threads or wherever. The platform’s algorithm decides which of their posts to show you based on predicted engagement. It notices which types of commentary you interact with and shows you more of that. Over time, even among voices you’ve explicitly chosen to follow, you’re getting a filtered subset optimized for keeping you engaged.
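To make the filtering concrete, here is a deliberately oversimplified sketch of engagement-ranked feed selection, in Python. Every name and weight in it is invented for illustration; no platform publishes its actual model, which is part of the problem discussed below.

    # Oversimplified sketch of engagement-ranked feed selection.
    # All field names and weights are hypothetical, not any platform's real system.
    from dataclasses import dataclass

    @dataclass
    class Post:
        author: str
        p_like: float     # model's predicted probability you like it
        p_reply: float    # ...reply to it
        p_reshare: float  # ...reshare it

    def predicted_engagement(post: Post) -> float:
        # Hypothetical weights: replies and reshares count more than likes,
        # because they keep you (and others) on the platform longer.
        return 1.0 * post.p_like + 5.0 * post.p_reply + 10.0 * post.p_reshare

    def build_feed(candidates: list[Post], slots: int = 20) -> list[Post]:
        # You see only the top-scoring subset of what your follows posted.
        return sorted(candidates, key=predicted_engagement, reverse=True)[:slots]

Notice what the objective never mentions: representativeness. Nothing rewards showing you a fair sample of what the people you follow actually said.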

The same thing happens with recommendations. YouTube suggests commentary videos based on what kept you watching before. Substack recommends newsletters similar to ones you already read. Podcast apps push shows “you might like” based on algorithmic similarity to your current subscriptions.
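Recommendation runs on the same logic, just extended to sources you haven’t subscribed to yet. A common building block is similarity in some feature space; the toy below uses cosine similarity over made-up “topic mix” vectors, which is an assumption for illustration, not a claim about any specific platform’s method.

    # Toy nearest-neighbor recommender: suggest whichever unread source
    # looks most like your reading history. All vectors are invented.
    import math

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norm

    history = [0.9, 0.1, 0.0]  # your reading so far, as a topic mix
    catalog = {
        "more_of_the_same": [0.8, 0.2, 0.0],
        "adjacent_take":    [0.6, 0.3, 0.1],
        "challenging_view": [0.1, 0.2, 0.7],
    }
    # "You might like" = the nearest neighbor to what you already read.
    print(max(catalog, key=lambda name: cosine(history, catalog[name])))
    # -> more_of_the_same; the challenging perspective scores lowest by construction.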

None of this is random. These systems are designed to maximize time on platform, subscriptions, clicks, watches. They’re not designed to expose you to challenging perspectives, diverse viewpoints, or ideas you wouldn’t naturally gravitate toward. That’s not the optimization target.

The result is that even people who think they’re intentionally seeking diverse commentary are getting algorithmically curated versions of diversity. You might follow voices across the political spectrum, but the algorithm shows you the ones most likely to make you engage—usually the ones you agree with or the ones that make you angry.

The Engagement Trap

Algorithms optimize for engagement because that’s what platforms monetize. But engagement isn’t a neutral metric. It’s heavily weighted toward emotional reaction: anger, validation, tribal affiliation, outrage.

Commentary that makes you think, “Huh, I hadn’t considered that angle,” doesn’t generate as much engagement as commentary that makes you think, “Yes! Exactly!” or “This is outrageous!” The thoughtful piece gets a like. The validating or enraging piece gets shares, comments, quote tweets, arguments.
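Run the arithmetic with hypothetical weights (the specific numbers are invented; the asymmetry between a like and a share is the point):

    # Hypothetical engagement scoring; weights are illustrative only.
    WEIGHTS = {"like": 1, "comment": 5, "share": 10, "quote": 10}

    def score(reactions):
        return sum(WEIGHTS[kind] * count for kind, count in reactions.items())

    thoughtful = {"like": 200, "comment": 5, "share": 2, "quote": 1}
    enraging   = {"like": 150, "comment": 90, "share": 60, "quote": 40}

    print(score(thoughtful))  # 200 + 25 + 20 + 10 = 255
    print(score(enraging))    # 150 + 450 + 600 + 400 = 1600

Under any weighting shaped like this, the enraging post beats the thoughtful one roughly sixfold despite earning fewer likes.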

So the algorithm learns to show you more of the validating or enraging content. Not because of some conspiracy to radicalize you, but because that’s what the engagement data suggests you want. The system is just doing its job.

This creates a feedback loop. Commentators notice what performs well and produce more of it. Platforms notice what keeps users engaged and show more of it. Users get increasingly sorted into engagement-optimized filter bubbles without necessarily choosing or noticing.
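That loop is easy to simulate. The toy model below makes strong assumptions (a fixed “pull” per content type, exposure allocated in proportion to past engagement) and exists only to show how a small edge compounds:

    # Toy feedback loop. All numbers are invented; the compounding is the point.
    pull = {"nuanced": 1.0, "validating": 1.5, "enraging": 2.0}
    engagement = {kind: 1.0 for kind in pull}  # start all content types equal

    for _ in range(10):
        total = sum(engagement.values())
        exposure = {k: v / total for k, v in engagement.items()}  # the algorithm's choice
        engagement = {k: exposure[k] * pull[k] for k in pull}     # the audience's response

    total = sum(engagement.values())
    print({k: round(v / total, 3) for k, v in engagement.items()})
    # -> roughly {'nuanced': 0.001, 'validating': 0.053, 'enraging': 0.946}

Ten rounds in, “enraging” holds about 95 percent of the feed, starting from a dead-even split.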

You’re Not Seeing Disagreement

One of the biggest impacts is how little genuine ideological diversity most people encounter in algorithmically curated feeds. You might think you’re seeing “both sides” because the algorithm occasionally shows you outrageous takes from the other team. But that’s not diversity—that’s rage bait.

Real diversity would be encountering well-reasoned arguments from perspectives you disagree with, presented by thoughtful people making their best case. The algorithm doesn’t show you that, because that kind of commentary doesn’t perform well. Why would you engage with a careful argument from someone you disagree with when you could engage with a dunk on someone saying something stupid?

So you end up with a weird dynamic where your feed includes some representation of other viewpoints, but only the worst or most extreme versions. This makes you more convinced your side is right (look at how unreasonable they are!) while giving you the impression you’re encountering diverse perspectives.

The commentary you’re missing is the stuff that would actually challenge your thinking: people who share some of your values but reach different conclusions, or people who disagree with you but do so thoughtfully enough that you’d have to engage with their arguments rather than dismiss them.

The Professionalization of Outrage

Commentators who understand algorithmic dynamics have adapted. They know what performs. Nuance doesn’t perform. Careful analysis doesn’t perform. Thoughtful disagreement doesn’t perform. Outrage, dunks, tribal signaling, and validation perform.

Smart commentators can do both—write thoughtful pieces for their core audience while also producing high-engagement posts that the algorithm will amplify. The thoughtful work builds credibility and loyalty. The engagement-optimized work builds reach.

But there’s pressure to lean more heavily on the engagement-optimized stuff because that’s what the platforms reward with reach. And reach translates to subscribers, followers, influence, and money. The incentives all point toward giving the algorithm what it wants.

This means even commentators committed to thoughtful discourse are pulled toward producing more performative, engagement-optimized takes. Not because they want to, but because the alternative is obscurity.

Platform Power

The platforms making these algorithmic decisions have enormous power over public discourse. They’re deciding, at scale, which commentary gets amplified and which gets buried. This isn’t like editorial decisions by newspapers—it’s algorithmic curation affecting billions of people.

And the platforms are mostly unaccountable. The algorithms are proprietary black boxes. They change frequently without announcement. There’s no transparency about why you’re seeing what you’re seeing or what you’re not seeing.

When a newspaper editorial page decides what commentary to run, you can see the decision, critique it, hold the editors accountable, or take your reading elsewhere. When an algorithm decides what commentary to show you, you mostly don’t even know it’s happening.

Can You Opt Out?

Technically yes, practically no. You can try to consume commentary without algorithmic intermediation: go directly to websites, use RSS feeds, manually check sources you want to follow. But this only works if you’re already aware of what you’re missing and deliberately seeking it out.
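For what it’s worth, the direct route is mundane to build. Here is a minimal sketch using the Python feedparser library; the feed URLs are placeholders, so substitute your own sources.

    # Non-algorithmic reading list: every item from feeds YOU chose,
    # newest first, with no ranking model in between.
    # Requires: pip install feedparser. URLs below are placeholders.
    import time
    import feedparser

    FEEDS = [
        "https://example.com/columnist-a/rss",
        "https://example.org/magazine-b/feed.xml",
    ]

    items = []
    for url in FEEDS:
        for entry in feedparser.parse(url).entries:
            when = entry.get("published_parsed") or time.gmtime(0)
            items.append((when, entry.get("title", ""), entry.get("link", "")))

    for when, title, link in sorted(items, reverse=True):
        print(time.strftime("%Y-%m-%d", when), "|", title, "|", link)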

Most people won’t do this. They’ll consume commentary the easy way, through algorithmically curated feeds. Even people who are thoughtful about media consumption usually don’t have time to manually curate everything. The algorithm is just too convenient.

And even if you opt out individually, you’re still affected by algorithmic curation shaping which commentary becomes culturally influential, which commentators build large audiences, which perspectives dominate public discourse. You can’t fully escape the system just by not participating directly.

What This Means for Discourse

When algorithms control access to commentary, public discourse gets optimized for engagement rather than quality, insight, or diversity. The commentary that spreads is the commentary that triggers emotional reactions. The perspectives that reach large audiences are the ones that validate existing beliefs or provide enemies to be outraged about.

This makes productive disagreement harder. How can you have thoughtful debate when the commentary most people encounter is algorithmically selected to reinforce their existing positions or make the other side look unreasonable?

It also creates false consensus effects. When the algorithm shows you lots of commentary agreeing with your position and only the worst arguments from the other side, you get the impression your view is obviously correct and widely held. You don’t see the thoughtful opposition because the algorithm filtered it out.

No Easy Solutions

You can’t fix this by asking platforms to change their algorithms. The algorithms are working as designed. Engagement drives revenue. Platforms won’t voluntarily optimize for something else.

You can’t fix it by asking commentators to resist the incentives. Some will, many won’t, and the ones who do will have less reach than the ones who don’t. The incentives are structural.

You can’t fix it by asking audiences to be more intentional about their consumption. Some will, most won’t, and even the ones who try will be partially shaped by algorithmic curation of the broader discourse.

Maybe the answer is building new platforms with different optimization targets. Maybe it’s regulatory intervention requiring algorithmic transparency. Maybe it’s accepting that algorithmic curation is the reality and figuring out how to preserve quality discourse within that constraint.
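What a different optimization target could look like is itself an open design question. One hypothetical version, sketched below, greedily picks posts by predicted engagement minus a penalty for similarity to what has already been selected, in the spirit of a maximal-marginal-relevance rule. It is a thought experiment, not a description of any existing product:

    # Hypothetical diversity-regularized feed objective (a sketch, not a product).
    def similarity(a: dict, b: dict) -> float:
        return 1.0 if a["viewpoint"] == b["viewpoint"] else 0.0  # crude stand-in

    def diverse_feed(candidates: list[dict], slots: int, lam: float = 0.5) -> list[dict]:
        chosen: list[dict] = []
        pool = list(candidates)
        while pool and len(chosen) < slots:
            def adjusted(item: dict) -> float:
                # Engagement minus a penalty for echoing what's already chosen.
                penalty = max((similarity(item, c) for c in chosen), default=0.0)
                return item["engagement"] - lam * penalty
            best = max(pool, key=adjusted)
            chosen.append(best)
            pool.remove(best)
        return chosen

    posts = [
        {"viewpoint": "A", "engagement": 0.9},
        {"viewpoint": "A", "engagement": 0.8},
        {"viewpoint": "B", "engagement": 0.6},
    ]
    print([p["viewpoint"] for p in diverse_feed(posts, slots=2)])  # -> ['A', 'B']

Whether anyone would fund a platform that deliberately leaves engagement on the table is, of course, the hard part.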

What’s definitely not working is pretending the algorithms aren’t there, or that they’re neutral tools for helping people find content they want. They’re shaping what you see, what you think, and what becomes culturally dominant. Ignoring that isn’t making it go away.