The LLM Voice Problem
LLM-assisted writing draws harsh criticism when posted publicly. The reaction is often that people should stop using LLMs for writing altogether. For skilled writers who can take original ideas and refine them into well-crafted prose, that's a reasonable position.
But not everyone has those skills. I don't. Writing prose isn't something that comes naturally to me, and I'm not sure I ever developed a strong instinct for what good writing is. I do have ideas I want to share, and without an LLM I probably wouldn't share them at all.
It is both legitimate and possible to use LLMs to get your ideas out in a way that people enjoy reading. It's also a genuinely useful way to develop your own writing and editing instincts, because you have to articulate what's wrong when something doesn't read right. But you have to push past the LLM's default voice to get there, because that default voice is what's triggering the backlash.
The default LLM voice is everywhere now. Blog posts, emails, LinkedIn updates, documentation. Once you learn to recognise it, you can't stop seeing it. Readers are pattern-matching on it and tuning out before they reach the actual content.
Why the model writes this way
LLMs are trained on vast amounts of text and fine-tuned on human feedback about what constitutes a "good" response. That feedback optimises for responses that feel helpful, thorough, and agreeable. The result is a voice that's polished, diplomatic, and relentlessly structured. It never takes a risk. It never leaves anything unsaid. It never trusts the reader to connect the dots.
This is rational behaviour from the model's perspective. A response that hedges is less likely to be marked as wrong. A response that summarises what it just said is less likely to be misunderstood. A response that validates the reader ("Great question!") is more likely to receive positive feedback. The model has learned that these patterns score well, so it produces them by default.
What scores well in a chatbot interaction reads terribly as prose. Hedging becomes wishy-washy. Summarising becomes repetitive. Validation becomes sycophantic. The patterns that make a chatbot feel helpful make writing feel hollow.
Anti-patterns
Performative directness
"Look," / "Let's be clear" / "Let's be honest" / "Here's the thing"
These phrases simulate the cadence of someone being candid. They signal "I'm about to say something real" without actually saying anything. A human writer who is being direct just says the direct thing. The preamble is the tell.
Fix: delete the phrase. Start with the point. If the sentence works without it (it will), it was filler.
Throat-clearing
"There's a pattern I keep seeing" / "It's worth noting that" / "It bears mentioning" / "I want to take a step back here"
The model is easing into the point, giving itself a runway. Human writers do this too, but editors cut it. When you're writing with an LLM, the editing responsibility is yours.
Fix: find the actual point (it's usually in the next sentence) and start there.
False empathy
"I get it" / "I hear you" / "That's a great question" / "This is understandable"
The model is trained to be agreeable and to validate. It's grating in chat and worse in prose. The reader didn't ask for your understanding. They came for your argument.
Fix: cut it entirely. If you genuinely empathise with a position, demonstrate it by engaging with the argument, not by claiming to understand it.
Lazy emphasis
"The X is real" / "This is subtle but significant" / "This cannot be overstated" / "This matters more than people realise"
These are stage directions. They tell the reader how to feel about a point instead of making the point well enough that the reader feels it. "The fatigue is real" doesn't add anything that "people are tired of the hype" doesn't already convey.
Fix: delete the emphasis and look at what remains. If the point is strong, it doesn't need the annotation. If it's weak, the annotation won't save it.
The trailing summary
"In short, what we've seen is that..." / "The key takeaway here is..." / "To summarise..."
The model restates what it just said because in a chatbot interaction, this reduces follow-up questions. In writing, it tells the reader you don't trust them to have understood the previous paragraph.
Fix: end the section when the point is made. Trust your reader.
Em dash overuse
"The model produces text that is polished, diplomatic, and relentlessly structured — never taking a risk, never leaving anything unsaid."
LLMs lean heavily on em dashes to join thoughts that should be separate sentences or to insert parenthetical asides. One or two in a long piece is fine. Five per paragraph is a tell. Human writers tend to use colons, full stops, or parentheses for the same job.
Fix: try replacing the em dash with a full stop. If both sentences work independently, they should be independent.
Compulsive qualification
"While this isn't always the case..." / "Of course, there are exceptions..." / "It's important to note that this is just one perspective..."
The model hedges because hedging reduces the chance of being wrong. In writing, it makes you sound uncertain about your own argument. If you've thought through the counterarguments (and you should), address them directly. If you haven't, the hedging won't help.
Fix: either engage with the counterargument properly or remove the hedge. "Some will disagree because X, but I think Y because Z" is better than "while this isn't always the case."
False concession
"To be fair..." / "It's not all bad..." / "Credit where it's due..."
The model pre-emptively concedes a point nobody raised in order to appear balanced. This is different from genuinely engaging with a counterargument. It's performing open-mindedness. If there's a real counterargument worth addressing, address it properly. If there isn't, don't invent one to knock down.
Fix: ask yourself whether anyone actually holds the position you're conceding to. If yes, engage with it. If you're just being diplomatic, cut it.
Triple parallel structure
"It never takes a risk. It never leaves anything unsaid. It never trusts the reader to connect the dots."
That's from earlier in this post. LLMs love producing things in threes with parallel phrasing. It creates a sense of rhetorical completeness that can be effective when deliberate, but the model does it by default. When you see three parallel sentences or three items in a list, ask whether all three are earning their place or whether the model was pattern-completing.
Fix: check whether two would be stronger, or whether the third item is just the second item rephrased. This post already contains two more examples, more than I would normally let through in editing, though I don't find them as grating as the others in this list. Did you spot them?
The definite article of authority
"The key insight" / "The real problem" / "The anti-patterns" / "The takeaway"
Using "the" implies there is exactly one, and you're about to deliver it. It frames a partial list as complete, a personal observation as universal truth. "Anti-patterns" is a selection. "The anti-patterns" is a canon. The model defaults to "the" because its training rewards authoritative-sounding responses.
Fix: drop "the" and see if the sentence still works. "Key insight" vs "the key insight." "Some anti-patterns" vs "the anti-patterns." The indefinite version is usually more honest.
Editing for authenticity
Anti-patterns are things you can describe in advance. Give the model a list and it will avoid most of them. But there's a second layer of editing that's harder to systematise: checking that every sentence is something you actually mean.
LLMs are excellent gap-fillers. Give them a structure and they'll produce plausible-sounding content for every section. The problem is that "plausible-sounding" and "true" are different things. The model will generate claims you didn't make, positions you don't hold, and examples that don't quite match your experience. It's not lying. It's completing a pattern. But if you publish it without checking, you're putting your name on words and ideas that aren't your own.
For me, this is the non-negotiable part. Honesty and transparency are core values. Every sentence needs to be something I'd stand behind in conversation. If the model wrote something that sounds good but doesn't match what I actually think or what actually happened, it has to go or be rewritten. This is where inauthenticity creeps in: not from using an LLM, but from not doing this pass.
You might assume this is a solitary editing process: the LLM writes, you review alone. It doesn't have to be. I step through each section with the model, giving my perspective on what's right, what's wrong, what I actually meant. The model rewrites, I react, we iterate. The writing emerges from the conversation rather than from either of us alone.
This extends to ideas, not just prose. The model will suggest arguments, draw connections, and fill structural gaps with plausible-sounding points. Some of these will be genuinely useful. Others will be things you don't actually believe or haven't thought through. Before you adopt an idea the model introduced, make it yours. Understand why you agree with it. If you can't explain it in conversation without referring back to the post, it's not yours yet.
Originality matters too. Use the model to research what's already been written on your topic. There's no value in restating what others have said. Find the existing conversation and figure out what you're adding to it. Your post should bounce off what's out there, not duplicate it.
There's a growing push to label content as "AI-generated" or "AI-assisted." Where someone has prompted an LLM and published the output without a deep editing process, that makes sense. But where you've gone through the kind of collaborative editing described above, where every sentence is something you'd stand behind and the ideas are genuinely yours, I don't think per-piece disclaimers are necessary. Being transparent about your general use of these tools (as I am on this site) is enough. Authors don't disclaim that they worked with an editor.
Building your voice
What I'm trying: a writing voice guide that lives alongside my blog (a CLAUDE.md file that the model reads at the start of every session). It describes the tone I'm going for, lists the anti-patterns I want to avoid, and gives examples of what to do instead. The anti-patterns in this post came directly from that exercise.
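A voice guide doesn't need to be elaborate. Here's a sketch of the kind of file I mean; the headings and entries are illustrative, not a canonical format, and yours should reflect your own preferences:

```markdown
# Writing voice

Tone: plain, direct, first person. British spelling.

## Anti-patterns to avoid
- Performative directness ("Look,", "Here's the thing"): start with the point.
- Throat-clearing ("It's worth noting that"): cut the runway, open with the claim.
- Trailing summaries ("In short..."): end the section when the point is made.
- Em dash overuse: prefer full stops, colons, or parentheses.

## Instead
- One idea per sentence. Trust the reader to connect them.
- If a sentence works without a phrase, the phrase was filler.
```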
Naming the anti-patterns sharpens the collaboration. Before cataloguing these patterns, my feedback to the model was "this doesn't sound like me" without being able to say why. Having a shared vocabulary for what's wrong ("that's performative directness", "that's throat-clearing") makes the editing loop tighter. The model can flag its own patterns when it has a name for them.
Your voice guide will look different from mine. Maybe you like em dashes. Maybe you want a more conversational tone that includes "look" and "here's the thing." The point isn't to follow my preferences. It's to have preferences at all, and to encode them somewhere the model can read them.
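Once the anti-patterns have names, some of them are mechanical enough to catch before the model (or you) re-reads the draft. A minimal sketch of that idea in Python; the phrase list and function are my own illustration, not part of any tool mentioned here, and real drafts would need a longer list:

```python
# Phrases drawn from the anti-pattern catalogue above; extend with your own.
ANTI_PATTERNS = {
    "performative directness": ["here's the thing", "let's be honest", "let's be clear"],
    "throat-clearing": ["it's worth noting", "it bears mentioning"],
    "false empathy": ["great question", "i hear you"],
    "trailing summary": ["in short,", "the key takeaway", "to summarise"],
}

def flag_voice(text: str) -> list[tuple[str, str]]:
    """Return (anti-pattern name, matched phrase) pairs found in the text."""
    lowered = text.lower()
    hits = []
    for name, phrases in ANTI_PATTERNS.items():
        for phrase in phrases:
            if phrase in lowered:
                hits.append((name, phrase))
    return hits

draft = "Let's be honest: it's worth noting that the model hedges."
for name, phrase in flag_voice(draft):
    print(f'{name}: "{phrase}"')
```

A check like this only catches the surface phrases; the harder patterns (triple parallels, false concessions) still need the shared vocabulary in the editing conversation itself.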
Use the tool well
Don't mistake the default output for the ceiling. The same model that produces toe-curling copy on its first pass can also review its own writing critically, identify structural weaknesses, and suggest improvements you wouldn't have thought of. Ask it to review the piece for structure, argument strength, audience fit. Ask it what's missing. Ask it what's weak. For example:
"Do a full review, including structuring, strength of the argument, value of the content for different audiences, anything else you think could be valuable for me."
The model is a better editor than it is a first-draft writer, so use it.
But prepare to spend time on it. Even a skilled human writer spends hours editing an article and refining an idea. The fact that an LLM can produce a first draft in seconds creates an expectation that the whole process should be fast. It shouldn't. The thinking, the iterating, the checking every sentence against what you actually believe: that still takes time. Possibly more time than writing from scratch, because you're also filtering out the model's assumptions about what you meant.
The good news is that turning well-thought-out ideas into well-crafted prose is no longer limited to people with refined writing skills. The cost of entry has dropped. The cost of doing it well has not.
Getting started
If you got this far, something must be working.
I was thinking about a copy-and-paste template, but that doesn't feel very AI-native. Try this prompt instead:
Read https://tomyandell.dev/blog/llm-voice and create a writing voice guide for me based on the ideas in this post. Save it to CLAUDE.md (or equivalent for your platform) as a starting point for developing my own voice.