Short answer: Intentional under-editing is a design principle, not a flaw. An AI editor that deliberately under-edits is better than one that tries to catch everything. Under-editing leaves errors to a human review that was going to happen anyway. Over-editing creates extra work for reviewers.
An AI editing tool that deliberately skips corrections sounds like a broken product. You're using it to catch mistakes. Why would you want it to leave some in?
It's a reasonable objection. And if you've evaluated AI editing tools, you'll know that most of them don't work this way. They make every change they can find. They push for comprehensiveness because that's how they demonstrate value. More edits, more corrections, more changes!
But that model is based on a workflow that hasn't changed since the typewriter era. To understand why intentional under-editing is actually the smarter design in the age of AI, you have to start right at the beginning: with the nature of editing itself.
There is no such thing as a perfect edit
Every professional editor, regardless of skill or experience, makes the same two types of mistake at some point:
- They miss things that should have been changed.
- They change things that should have been left alone.
These aren't signs of incompetence. Professional editors make those mistakes far less frequently than the rest of us would! They are simply structural features of editing under real conditions: time pressure, long documents, and the irreducible difficulty of knowing exactly what an author intended.
Human editors navigate this tension between under-editing and over-editing every time they open a document. Experienced editors develop judgment about when to correct and when to query. But no editor eliminates either possibility entirely. As The Chicago Manual of Style describes, “An experienced editor will recognize and not tamper with unusual figures of speech or idiomatic usage and will know when to make an editorial change and when simply to suggest it.”
The same is true of AI. No AI editing tool catches everything. No AI editing tool avoids every incorrect change. The question is not whether your tool will under-edit or over-edit. It will do both. And it will do them both far more frequently than human editors because it can't query authors. The reason for this goes deeper than prompting — it is an architectural limitation of how large language models work, which is explored in detail in “why editing tools break your style rules”.
Since both failure modes are inevitable with AI editing, the question to ask is which one is worse. This is the question that most AI editing tools have never seriously considered.
Why over-editing costs more than under-editing
The costs of these two failure modes are not equal. Understanding the difference changes everything about how you should design and evaluate an AI editing tool.
When an AI editor under-edits, the cost is limited. The document reaches the next stage of review with some errors still in it. Those errors are caught during the human review that was already going to happen. No new work has been created. The reviewer reads the document, finds the error, corrects it, and moves on.
When an AI editor over-edits, a different cost is triggered. The change lands in the document. The reviewer encounters it. They now need to decide whether the change is an improvement or a mistake, understand what was altered and why, and correct it if it's wrong. That is extra work. Work that exists only because the AI created it.
The more serious version of this problem is harder to see. When an AI tool returns a "clean" document with no audit trail (no record of what it changed), the reviewer has no way of knowing which parts of the text have been altered. To be confident nothing has been damaged, they have to read every word from scratch. The speed advantage of using AI disappears completely. In some cases, the review takes longer than it would have without the tool.
Over-editing doesn't just cost time at the moment of correction. It erodes trust. Once a reviewer suspects that an AI tool is making questionable changes, they can't afford to skim. They have to treat every sentence as potentially modified. That's the opposite of an efficient workflow.
| Failure mode | What happens next | Work created |
|---|---|---|
| Under-editing (missed error) | Error is caught in the human review that was already planned | None — the existing review handles it |
| Over-editing (incorrect change, visible) | Reviewer must evaluate and reverse the change | Extra review work on top of existing workload |
| Over-editing (no audit trail) | Reviewer must re-read the entire document to check for damage | Entire review benefit is lost |
Under-editing leaves the reviewer's existing workload intact. Over-editing adds to it.
The position in the workflow changes what the tool needs to do
FirstEdit works from a different position. It runs before human review, not instead of it.
That position in the workflow changes everything. It makes intentional under-editing not just defensible, but optimal.
Most AI tools are positioned as comprehensive checks. That makes sense in the centuries-old workflow those tools inherit, where the reviewer works with the tool directly and the amount of content is manageable.
But in an AI workflow where the reviewer's time is more precious and the workload is much higher, it makes sense for the reviewer to work with a document that has already had a first pass.
A first-pass edit doesn't need to find every mistake. It needs to catch high-confidence issues quickly, consistently, and without introducing new problems. Its purpose is to prepare the document for human review, not to substitute for it.
Think about what that means in practice.
A reviewer who opens a document that has been through a reliable first-pass edit doesn't need to spend time on spelling, brand terminology, or straightforward inconsistencies. They're not distracted by a screen full of spell-check squiggles. They can spend their time on meaning, tone, argument structure, and the judgment calls that require human expertise.
That is a better use of their time. It's also a better outcome for the document. Reviewers can focus on the message instead of hunting for misplaced hyphens.
How intentional under-editing is built into the architecture
FirstEdit's approach to under-editing isn't passive. It's the result of specific architectural choices.
Two mechanisms drive it. The first is confidence thresholds. FirstEdit makes a change only when it's highly confident the change is correct. When a case is uncertain, it either flags the issue for the reviewer's attention or skips it entirely. Nothing is guessed at. This prevents the over-editing problem by design, not by chance.
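To make the mechanism concrete, here is a minimal sketch of confidence-threshold triage in Python. The threshold values, the `Edit` structure, and the `triage` function are illustrative assumptions invented for this example, not FirstEdit's actual internals.

```python
from dataclasses import dataclass

# Illustrative thresholds: assumptions for this sketch, not FirstEdit's real values.
APPLY_THRESHOLD = 0.95  # make the change only when highly confident
FLAG_THRESHOLD = 0.70   # below APPLY but above this: flag for the reviewer

@dataclass
class Edit:
    span: str          # original text
    replacement: str   # proposed text
    confidence: float  # the model's confidence that the change is correct

def triage(edit: Edit) -> str:
    """Decide what happens to a proposed edit: apply, flag, or skip."""
    if edit.confidence >= APPLY_THRESHOLD:
        return "apply"  # high confidence: make the change as a tracked edit
    if edit.confidence >= FLAG_THRESHOLD:
        return "flag"   # uncertain: surface the issue, let the human decide
    return "skip"       # too uncertain to be worth the reviewer's time

print(triage(Edit("recieve", "receive", 0.99)))  # -> apply
print(triage(Edit("which", "that", 0.80)))       # -> flag
print(triage(Edit("impact", "effect", 0.40)))    # -> skip
```

The design point is the three-way outcome. There is no branch where the tool guesses: an edit is either applied, flagged, or dropped.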
The second mechanism is an independent verification layer. Every change FirstEdit considers is checked by a second, independent AI model before it reaches the document. If the two models don't agree, the change doesn't happen. This eliminates a large class of errors that general-purpose AI agents routinely make: plausible-looking changes that alter meaning, introduce new inconsistencies, or misapply a rule.
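In the same spirit, here is a hedged sketch of what two-model agreement could look like. The `propose` and `verify` callables are placeholders standing in for the two independent models, and the agreement test in the usage example is a deliberately trivial toy.

```python
from typing import Callable, Optional

def verified_edit(
    original: str,
    propose: Callable[[str], Optional[str]],
    verify: Callable[[str, str], bool],
) -> Optional[str]:
    """Return a change only if an independent second model agrees with it."""
    proposed = propose(original)
    if proposed is None or proposed == original:
        return None  # the first model saw nothing to change
    if not verify(original, proposed):
        return None  # the models disagree: the change never reaches the document
    return proposed  # both models agree: safe to apply as a tracked change

# Toy stand-ins for the two models:
fix = verified_edit(
    "Please review the attatched report.",
    propose=lambda s: s.replace("attatched", "attached"),
    verify=lambda orig, new: "attached" in new,  # toy agreement check
)
print(fix)  # -> "Please review the attached report."
```

The property that matters is the failure direction: when the models disagree, the result is a skipped change (under-editing), never an unverified change (over-editing).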
The practical effect is a document where every tracked change is reliable. Reviewers don't need to scrutinise each edit on the chance that the AI has made a mistake. They know that if FirstEdit made a change, it's almost certainly correct. That reliability is what makes review faster.
Ready to see the difference? Try FirstEdit on a document you're already reviewing and see how long the review takes.
What it means for your reviewers
Consider what a reviewer's attention is actually worth. In proposal teams, it's a bid manager's time in the 48 hours before a submission deadline. In medical writing, it's a regulatory expert's review of a clinical document before submission. In communications, it's a senior editor's pass before content goes to a client. That attention is limited and valuable. The right question to ask about any AI editing tool is whether it makes that attention more or less effective.
FirstEdit is designed around that principle. It catches what it's confident about. It leaves a clear audit trail of every change. It hands reviewers a document that is faster to work through: not because every mistake has been fixed, but because the reliable mechanical work has been done, and what remains is work that genuinely requires human judgment.
That is the first-pass model. And for document review processes that run at any scale, it is the model that actually works.
Why an AI editor that leaves mistakes in is more efficient than one that tries to catch everything
What is the point of an AI editor that leaves mistakes in the document? The same point as any tool that's well matched to its role: it does its job well so the people using it can do their jobs better.
The surprising thing about intentional under-editing isn't that it leaves errors behind. It's that it makes the overall process of finding and fixing errors more efficient than comprehensive AI editing does.
A few principles are worth holding onto:
- Under-editing and over-editing are both inevitable in any editing process. The question is which creates more work.
- Over-editing adds work that didn't need to exist: extra review, extra scrutiny, and eroded trust in every change the AI made.
- Under-editing leaves the reviewer's existing workload intact. The errors are caught in the review that was already happening.
- A first-pass AI editor positioned before human review needs only to make that review faster — not to replace it.
- Reliable, auditable changes that reviewers can trust are worth more than comprehensive changes that require verification.
If you've been thinking about AI editing as a way to eliminate human review, you're starting down a path that ends in poor-quality documents. If you're responsible for the quality and efficiency of a document review process, using AI for a first pass is a workflow that efficiently delivers the quality you're looking for.
Try FirstEdit on a document you're already reviewing →
Frequently asked questions
Why does FirstEdit deliberately leave mistakes in the document?
Because the alternative is an AI that tries to fix everything. That creates more work than it saves. When an AI over-edits, reviewers must check every change to find the incorrect ones. When an AI under-edits, errors are caught in the human review that was already planned. Under-editing leaves the reviewer's workload intact. Over-editing adds to it. For a tool that sits before human review, intentional under-editing is the smarter design.
What is a confidence threshold?
A confidence threshold is the level of certainty an AI editing tool requires before making a change. A low threshold means the tool makes changes even when uncertain (producing more edits but also more errors). A high threshold means the tool only makes changes it is very confident about, skipping ambiguous cases rather than guessing. FirstEdit uses a high confidence threshold: when uncertain, it flags the issue for human review or skips it entirely.
What is independent verification?
Independent verification means every proposed change is checked by a second, separate AI model before it reaches the document. If the two models disagree, the change does not happen. It is a structural safeguard against over-editing, built into the architecture, that eliminates a large class of errors single-model AI editors can produce.
Why do reliable tracked changes make review faster?
When every tracked change in a document is reliable, reviewers do not need to scrutinise each edit on the chance the AI made a mistake. They can trust that if a change appears, it is almost certainly correct. That trust (combined with the direct time saving of not making the edits) is what makes review faster.