The Future of Commentary in an AI Age
AI can write opinion pieces now. Not just generic filler—coherent arguments on current topics, following opinion journalism conventions, producing prose that looks human-written. This is happening already, and it’s getting better.
So what’s the point of human commentary? If machines can generate plausible takes on any issue in seconds, why pay humans to do it slower and more expensively?
This is worth thinking through carefully, because the answers reveal what actually matters about commentary as a form.
What AI Does Well
Current AI can produce structurally sound arguments following standard opinion piece formats. It can marshal evidence, make logical connections, and write in various styles. For basic opinion pieces on straightforward topics, the output is often indistinguishable from human work.
It’s also fast and cheap. Generate a hundred different takes on an issue, pick the best one, publish. No writer to pay, no editorial process, no delays. The economics are compelling for publishers looking to fill space.
For certain kinds of commentary—recapping events, explaining straightforward policy, making obvious points about uncontroversial topics—AI is probably adequate. Maybe not brilliant, but adequate.
What’s Missing
But most of what makes commentary valuable is precisely what AI can’t do. Real insight doesn’t come from pattern matching on existing opinion pieces. It comes from genuine expertise, lived experience, and original thinking.
An AI can summarise what others have said about an issue. It can’t have worked in an industry for decades and bring that tacit knowledge to analysis. It can’t have personal experience of the thing being discussed. It can’t make connections nobody else has made because it can only recombine what it’s been trained on.
The commentary that actually matters—the pieces that shift how people think or introduce new frameworks—comes from humans bringing something genuinely new. AI can mimic existing commentary styles but can’t originate new ones.
The Expertise Question
Valuable commentary often comes from deep expertise. An economist explaining economic policy. A doctor analysing health system problems. A teacher discussing education reform. The value is in the expertise, not just the writing.
AI can fake this superficially—it can write in the style of expert commentary and include relevant concepts. But it doesn’t actually know things the way experts do. It’s pattern matching, not understanding.
This matters most when issues are complex or contested. AI can confidently make claims that sound expert but are wrong. A human expert would catch these errors; AI just generates plausible-sounding text.
The Voice Problem
Good commentary has distinct voice. You can identify writers by their style, their preoccupations, their way of approaching topics. This personality is part of the value—you read specific columnists because you value their particular perspective.
AI can mimic voices but doesn’t have one of its own. It can sound like an opinion columnist but there’s no actual person behind the voice with consistent views, experiences, or sensibility.
This might not matter for disposable content. But for commentary people actually value and return to, the human voice is central to the appeal.
The Moral Weight
When a human argues for a position, they’re taking responsibility for that argument. Their reputation is on the line. They might be wrong, might face criticism, might need to defend or modify their views.
AI-generated commentary has no stakes. The algorithm isn’t risking anything by making claims. There’s no accountability because there’s no person to hold accountable.
This matters ethically. Commentary isn’t just information—it’s someone taking a public position on issues that matter. That act of public positioning, with associated risks and responsibilities, is part of what gives it weight.
The Original Insight
The commentary that changes how people think comes from genuine insight—seeing something others missed, making connections others didn’t make, questioning assumptions everyone shares.
AI is fundamentally backward-looking. It’s trained on existing material and generates new text based on patterns in that material. It can recombine existing ideas but can’t truly originate new ones.
Human commentators can have genuine insights because they’re not just processing existing commentary. They’re bringing unique experiences, knowledge, and thinking to questions. That originality is what makes commentary valuable.
The Editorial Judgment
Even if AI could write adequate commentary, someone needs to decide what topics deserve commentary, what angles matter, what perspectives are missing from the discourse. That’s editorial judgment that requires human understanding of what matters and why.
You could theoretically have AI generate commentary while humans handle editorial selection. But then the humans are doing the harder and more important work—deciding what’s worth saying—while AI just executes.
What Gets Automated
Some commentary will get automated. Generic takes on predictable topics, recaps of events, explainers of straightforward issues. Publishers already treating commentary as filler will probably use AI for it.
This might actually be fine. If commentary is just content to fill space between ads, AI can do that. The loss to readers is minimal because this commentary wasn’t valuable anyway.
The question is what happens to commentary that’s actually good—that provides genuine insight, challenges assumptions, or helps people think differently. Can that survive when competing with free AI-generated content?
The Economic Challenge
If publishers can generate adequate commentary for free, why pay writers? This creates economic pressure even on genuinely valuable commentary. The good might get crowded out by the adequate if the adequate is free.
This is the general problem with AI in creative fields. Even if the best human work is better than AI, if AI work is cheap enough and adequate enough, economic pressure might not sustain the best human work.
The hope is that audiences will distinguish between meaningful commentary and AI-generated filler, creating market demand for the former. But that requires media literacy and willingness to pay for quality.
What Remains Valuable
Human commentary that survives AI competition will probably need to be distinctly human in ways that matter. That means:
- Drawing on genuine expertise rather than general knowledge.
- Bringing personal experience and perspective.
- Taking real positions with real stakes.
- Developing a distinctive voice.
- Making original arguments rather than recombining existing ones.
Basically, doing all the things that make commentary valuable but which are harder and more expensive than just generating text.
The Reader Question
This also depends on what readers want from commentary. If they want confirmation of existing views in readable prose, AI can provide that. If they want genuine challenge and insight, they need human commentators willing to provide it.
The current media landscape suggests many readers mostly want confirmation. But maybe that’s partly because we haven’t had better alternatives easily available. If genuinely insightful human commentary becomes rarer, perhaps readers will value it more.
The Opportunity
There might be an opportunity here for human commentary to differentiate itself by being more human. Less pretense of objectivity. More personal experience and perspective. More willingness to say “I don’t know” or “I changed my mind.”
The things that make good commentary good—genuine expertise, original thinking, personal voice, intellectual honesty—become more valuable when contrasted with AI’s pattern-matching superficiality.
The Near Future
We’re probably heading toward a bifurcated landscape. Lots of AI-generated commentary filling space on content farms and struggling publications. And a smaller amount of genuinely valuable human commentary that’s distinct enough to command attention and revenue.
The middle ground—competent but not exceptional human commentary—is most vulnerable. It’s neither as cheap as AI nor as valuable as the best human work.
Writers in that middle should probably be thinking about how to move toward the distinctive human end rather than competing with AI on efficiency.
What I’m Watching For
The real test will be whether AI-generated commentary starts influencing public discourse in meaningful ways. Right now it’s mostly background noise. But if AI-generated takes start shaping how people think about issues, that’s different.
Also watching whether readers develop the ability to distinguish AI from human commentary, and whether they care. If AI commentary is adequate and readers don’t care about the difference, my argument here is wrong.
But I suspect people do care about knowing a human with stakes and expertise is behind what they’re reading, even if they don’t consciously articulate why.
The Bet
My bet is that genuinely good commentary remains distinctly human for the foreseeable future. AI will displace mediocre commentary, but the best human work—insightful, original, grounded in real expertise and experience—will remain valuable.
That’s partly wishful thinking—I’m a human writer who’d prefer to remain relevant. But it’s also based on understanding what actually makes commentary worth reading beyond just filling time.
The challenge is maintaining economic structures that support good commentary when cheap alternatives exist. That’s not about the technology—it’s about whether we collectively value genuinely insightful writing enough to sustain it.
I hope we do. But we’ll find out.