AI Summarisation Is Quietly Killing Original Reporting
I watched a friend catch up on the day’s news in about four minutes. She opened an AI summarisation app, asked for a rundown of major stories, and got clean bullet-point summaries of articles from the Guardian, the ABC, the AFR, and a few niche tech publications.
She didn’t visit any of those sites. Didn’t see their ads. Didn’t hit their paywalls. Didn’t count as a pageview in their analytics. She got the information those newsrooms spent real money producing, and they got nothing in return.
This is the summarisation problem, and it’s accelerating faster than most people in media want to admit.
How the Value Chain Breaks
Traditional news economics worked like this: journalists investigate and write stories, publishers distribute them, readers visit the publication, publishers monetise through advertising and subscriptions, and revenue funds more journalism.
AI summarisation breaks this chain at the distribution step. The information still flows to readers, but it bypasses the publisher entirely. The reader gets what they need without ever engaging with the source.
This isn’t new conceptually. Google News snippets, social media link previews, and news aggregators have been doing versions of this for years. But AI summarisation is qualitatively different because it’s comprehensive enough to eliminate the need to click through.
A Google snippet gives you a headline and two sentences. You still need to visit the article for the full story. An AI summary gives you the full story—key facts, context, quotes, analysis—in a paragraph. Why would you click through?
The Free-Rider Economics
Reuters Institute research has tracked declining direct traffic to news websites for years. AI summarisation tools accelerate this trend by extracting maximum value from articles while returning zero traffic or revenue to publishers.
The perverse incentive is clear: the more effort a publication puts into thorough, well-researched reporting, the more valuable its articles become as inputs for AI summarisation. The better your journalism, the more you’re subsidising tools that undermine your business model.
Publications investing heavily in investigation, analysis, and expert commentary are creating the most valuable training and summarisation content. And they’re the ones being hurt most, because their content is worth summarising.
Generic news wire copy that just reports basic facts is less affected—it’s fungible and available from multiple sources. The distinctive, valuable journalism is precisely what gets extracted.
This dynamic came up in a conversation I had with an AI consultant who works with media companies on technology adoption. Their observation was that newsrooms are simultaneously being told to adopt AI tools for efficiency while watching AI tools from other companies cannibalise their revenue. It's a genuinely contradictory position.
The Legal Grey Area
Publishers have started fighting back legally. The New York Times' lawsuit against OpenAI put the core question squarely before the courts: does using copyrighted articles to train AI systems constitute infringement? But summarisation occupies an even greyer area.
If an AI reads a published article and produces a summary in different words, is that infringement? Traditional fair use principles might protect summarisation, just as they protect human-written book reviews that summarise the book’s content.
But the scale is different. A human reviewer summarising one book doesn’t threaten the book industry. An AI system summarising every article from every publication in real-time threatens the entire news industry.
The legal frameworks weren’t designed for this scale of automated content extraction. Courts are still working out how copyright applies, and by the time definitive rulings arrive, the damage may already be done.
What Publishers Are Trying
Harder-to-bypass paywalls. Some publications are implementing technical measures to prevent AI systems from accessing their content (a sketch of one common approach follows this list). This works against some tools but creates an arms race, and it also blocks legitimate search engine indexing that drives human traffic.
Licensing deals. Several major publishers have signed licensing agreements with AI companies—the AP with OpenAI, News Corp with various tech firms. These provide some revenue but typically at rates far below what the content would generate through direct readership.
robots.txt and AI-specific blocking. Publishers can technically tell AI crawlers to stay away (a sample robots.txt also appears below). But compliance is voluntary, enforcement is inconsistent, and blocking AI crawlers might also block legitimate services that drive traffic.
Creating their own AI tools. Some publications are building subscriber-only AI features—chatbots trained on their archives, personalised summaries for paying subscribers, AI-enhanced search. This keeps the AI value within the subscription ecosystem.
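To make the paywall item concrete: those "technical measures" typically start with filtering requests by user agent. Here's a minimal sketch, assuming a hypothetical Flask app; GPTBot (OpenAI), CCBot (Common Crawl), and ClaudeBot (Anthropic) are real, published crawler names, but any such list goes stale quickly.

    # Sketch: refuse requests from known AI crawler user agents.
    # The agent list is illustrative, not exhaustive, and dates fast.
    from flask import Flask, abort, request

    app = Flask(__name__)

    AI_CRAWLER_AGENTS = ("GPTBot", "CCBot", "ClaudeBot")

    @app.before_request
    def block_ai_crawlers():
        user_agent = request.headers.get("User-Agent", "")
        if any(bot in user_agent for bot in AI_CRAWLER_AGENTS):
            abort(403)  # serve nothing to these crawlers

A spoofed user agent sails straight past a check like this, which is exactly why the arms race escalates from here.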
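The robots.txt mechanism is simpler still: a plain text file at the site root asking crawlers to stay out. A minimal sketch, using the same published crawler names; the directives are requests, not enforcement:

    # robots.txt: ask AI crawlers to stay away; keep search indexing open
    User-agent: GPTBot
    Disallow: /

    User-agent: CCBot
    Disallow: /

    User-agent: Googlebot
    Allow: /

Nothing compels a crawler to honour this file, which is the enforcement gap noted above.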
None of these are comprehensive solutions. They’re patches on a fundamental economic problem.
The Quality Spiral
Here’s what worries me most: if AI summarisation continues reducing traffic and revenue to publications that do original reporting, those publications will produce less original reporting. They’ll rely more on wire copy, publish more opinion (cheaper to produce), and reduce investigative teams.
This means AI summarisation tools will have less quality content to summarise. The summaries will become shallower, less accurate, less useful. But by then, the newsrooms that produced the good stuff will have been hollowed out.
It’s a slow-motion tragedy of the commons. Everyone benefits from quality journalism existing. Nobody wants to pay for it when they can get the information for free through summarisation.
The Reader's Responsibility
I’m not going to pretend readers bear the primary responsibility here—the structural economics are the real problem. But there’s something worth acknowledging.
When you use an AI tool to summarise articles from publications you value, you’re choosing convenience over sustainability. The information feels free, but it was expensive to produce. Someone investigated, reported, wrote, edited, fact-checked, and published that content. The AI tool just extracted it.
If every reader of a publication switched to AI summaries, that publication would cease to exist. And then there’d be nothing to summarise.
This doesn’t mean you should feel guilty about using summarisation tools. It means the system needs structural solutions because individual reader behaviour won’t solve it.
What Might Work
Mandatory revenue sharing. If AI summarisation tools profit from content produced by publishers, they should share revenue proportionally. This requires regulation because voluntary sharing won’t happen at fair rates.
Attribution requirements. AI summaries should always link to source articles and present summarisation as an invitation to read the original, not a replacement for it.
Publisher consortiums. Collective negotiation gives publishers more leverage than individual deals. The News Media Bargaining Code in Australia attempted this with search engines and social platforms. A similar framework for AI summarisation tools seems inevitable.
Reader-funded models. Publications that build direct subscriber relationships are less vulnerable to traffic disruption. If readers pay for journalism directly, bypassing the article doesn’t eliminate the revenue stream.
The Uncomfortable Question
Is it possible that AI summarisation is just more efficient information delivery, and publishers need to adapt to a world where they’re not the bottleneck between information and readers?
Maybe. But “adapt” in this context often means “produce less journalism because the economics don’t support it.” That’s not adaptation—it’s degradation.
The question isn’t whether AI can summarise the news more efficiently than reading articles. Obviously it can. The question is whether we’ve created a system that values the efficient delivery of information while destroying the incentive to produce the information in the first place.
If we get this wrong, we end up with incredibly efficient summarisation of increasingly mediocre content. AI tools that can summarise anything, but nothing worth summarising.
That’s not a technology problem. It’s a governance problem. And we’re not solving it fast enough.