The Invisible Editors Shaping Your Daily News Diet
You probably think you choose what news you read. You open a news app, scroll through headlines, click on stories that interest you. It feels like agency, like you’re making informed decisions about what information deserves your attention.
But you’re not actually choosing from all the news. You’re choosing from what’s been pre-selected for you by invisible editors—algorithms, aggregators, and automated systems that decide what shows up in your feed before you even see it.
And most of us have no idea how those systems work or what they’re optimising for.
I started thinking about this when I noticed my news app showing me the same story from six different outlets, while completely missing a significant piece of reporting from a publication I’d never heard of. The algorithm had decided the first story was important—probably because lots of other people were clicking on it—so it drowned out everything else.
This happens constantly. The systems that surface news to us aren’t neutral. They make choices based on criteria we don’t fully understand: what’s popular, what matches our previous behaviour, what keeps us engaged. Sometimes those choices align with what we need to know. Often they don’t.
Think about how most people encounter news now. Not by going directly to a news site, but by scrolling through social media, checking a news aggregator, or asking a voice assistant. Each of these involves an algorithmic gatekeeper deciding what information reaches you.
Facebook’s algorithm prioritises content that generates engagement: likes, comments, shares. That systematically favours sensational stories over important ones. Google News surfaces stories based on relevance and authority signals, which tends to amplify already-popular sources. Apple News blends human curation with personalised recommendations drawn from your reading history, which can quietly narrow what you see.
None of these systems are optimised for “show people the most important news” or “create an informed citizenry.” They’re optimised for engagement, retention, and user satisfaction. Those aren’t necessarily bad goals, but they’re not the same as good journalism.
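To make that distinction concrete, here’s a minimal sketch in Python, with made-up story names, weights, and fields. It isn’t any platform’s actual code; it just shows how the choice of objective, not the stories themselves, decides what floats to the top.

```python
from dataclasses import dataclass

@dataclass
class Story:
    title: str
    clicks: int          # engagement signals a platform can actually measure
    shares: int
    comments: int
    civic_impact: float  # hypothetical 0-1 editorial judgement; feeds rarely have this

def engagement_score(s: Story) -> float:
    # Roughly what engagement-optimised feeds reward: measurable interaction.
    # The 3x and 5x weights are invented for illustration.
    return s.clicks + 3 * s.comments + 5 * s.shares

stories = [
    Story("Celebrity row goes viral", clicks=90_000, shares=12_000,
          comments=8_000, civic_impact=0.3),
    Story("Water regulator flags systemic failures", clicks=4_000, shares=300,
          comments=150, civic_impact=0.9),
]

print("Ranked by engagement (what the feed optimises):")
for s in sorted(stories, key=engagement_score, reverse=True):
    print(f"  {engagement_score(s):>9,.0f}  {s.title}")

print("Ranked by the editorial judgement the objective never sees:")
for s in sorted(stories, key=lambda s: s.civic_impact, reverse=True):
    print(f"  {s.civic_impact:>9.2f}  {s.title}")
```

Swap the objective and the ranking inverts. The stories didn’t change; only what the system was told to value did.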
The old gatekeepers—newspaper editors, TV news producers—had flaws too. They had biases, blind spots, commercial pressures. But at least their decision-making was somewhat transparent. You knew which newspaper you were reading, whose editorial judgment you were trusting. You could evaluate their track record and make informed decisions about how much to trust them.
With algorithmic curation, you’re trusting systems you can’t see, built by companies that don’t tell you how they work, optimising for goals that may not align with being well-informed. It’s gatekeeping without accountability.
I’m not arguing we should go back to the old model—that’s not possible, and it had its own problems. But we should at least be aware of what’s happening. When you read news through mediated platforms, you’re seeing a filtered version of reality, shaped by systems that may not have your best interests at heart.
This matters for several reasons. First, it affects what issues get attention. If the algorithm favours certain types of stories—conflict, outrage, novelty—then important but “boring” stories get buried. The steady erosion of public services doesn’t generate engagement the way a political scandal does, so you’ll see ten stories about the scandal and none about the services.
Second, it creates echo chambers. If the system shows you stories similar to what you’ve previously read, you end up in a feedback loop where your existing interests and biases are constantly reinforced. You think you’re getting a broad view of the news, but you’re actually getting a narrow slice tailored to what the algorithm thinks will keep you engaged.
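A toy simulation makes the loop visible. Assume, purely for illustration, a recommender that weights topics in proportion to your past clicks, and a reader who clicks whatever it serves. Run it long enough and the feed tends to collapse onto a couple of topics, even though every individual recommendation looked reasonable on its own.

```python
import random

random.seed(1)

# Hypothetical topic pool; interest weights start out equal.
topics = ["politics", "health", "science", "sport", "economy"]
weights = {t: 1.0 for t in topics}

def recommend() -> str:
    # Serve a story whose topic is sampled in proportion to learned interest.
    return random.choices(topics, weights=[weights[t] for t in topics])[0]

# The reader clicks whatever is served; each click feeds straight back in.
for _ in range(200):
    weights[recommend()] += 1.0

total = sum(weights.values())
for t in sorted(topics, key=weights.get, reverse=True):
    print(f"{t:<10}{weights[t] / total:.0%}")
```

There’s no malice anywhere in that loop. There’s just an objective (predicted clicks) feeding on its own output.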
Third, it makes us vulnerable to manipulation. If someone figures out how to game the algorithm—and people definitely have—they can get their stories disproportionately surfaced. This is how misinformation spreads so effectively on social platforms. The algorithm doesn’t care if something’s true; it cares if people engage with it.
What’s the solution? Transparency would be a start. Platforms could explain how their curation systems work, what they optimise for, and why you’re seeing certain stories. Some already do this to a degree, but the explanations are usually vague and unhelpful.
Better would be giving users more control. Let people adjust the algorithms, choose what gets prioritised, see what they’re missing. Some platforms experiment with this—showing you “stories you might have missed” or letting you toggle between different sorting options. But it’s usually an afterthought rather than a core feature.
I was talking to a consultancy we rate about how they approach data curation in their work, and they emphasised the importance of making the filtering process visible to clients. When they’re presenting information, they explain how it was selected and what might have been left out. Media platforms could learn from that approach.
The most important thing, though, is just awareness. Understand that what you see is being curated. Question why certain stories dominate your feed while others are absent. Seek out sources that might not appear in your algorithm-optimised news diet.
And occasionally—radical thought here—go directly to news websites and see what they consider important, without an algorithmic intermediary deciding for you. You might be surprised what you’ve been missing.
The invisible editors shaping our news consumption aren’t going away. They’re too embedded in how we access information now. But we can at least be conscious of their existence and their influence.
Because the first step to being well-informed is understanding how you’re being informed in the first place.