I recently had my in-progress manuscript for The Stygian Blades—a literary fantasy in a pseudo-Renaissance setting—reviewed by a professional developmental editor with fifteen years of experience in the industry and dozens of successful titles under his belt. Here’s what he told me worked:

“Your skills with dialogue are exceptional, and have been for every book of yours I’ve edited. But I think you’ve outdone yourself this time. Part of it is the nature of the language and accents, and part of it is the bawdy characters, but overall the dialogue is just so damn lively. I can’t help but adore it.”

“The characters pop. They have real, distinct personalities. (This is largely a function of the quality of the dialogue).”

“World-building really hits strongest when the reader gets the impression that the author knows 10x of the world-building material that is actually put on the page… and you’ve done that here.”

“We jump right into the action about as quickly as we possibly can… bam bam bam we never really slow down. This is awesome.”

“Put together all of the categories above, and then throw in a heaping dose of unabashed bawdiness, and every scene is just fun. This book has personality oozing out of its pores.”

Mind you, this editor is not one to give unwarranted praise. He’s had me toss and rewrite tens and tens of thousands of words before. He once had me scrap an entire manuscript and rewrite it from scratch. When he says something works, he means it—and when he identifies problems, they’re real.

He noted the reader needs more contextual scaffolding for the sociopolitical landscape—who are the Reformists, the Simplefellows, what’s the Church of Karnland doing in all this? Scene-level motivation needs clarity in places where characters are doing things and the reader doesn’t quite understand why they’re doing them right now. Two major plot turns need clearer setup so readers can see why the characters made those choices. Basic tactical details about living arrangements and working relationships between groups need to be established. This is specific, this is actionable, this is someone who understands what the book is trying to do and can see exactly where it needs structural support. He’s not telling me to rewrite it or questioning my vision—he’s identifying where the scaffolding needs reinforcement so the architecture I’ve built can fully support itself.

He ended his assessment with: “I think what you have is already wonderful. You just need to insert… not ‘meat on the bones’, but ‘bones in the meat.’ You already have the meat; what you need is more underlying scaffolding. And I think you can do this by seeding key bits of exposition along the way.” Then he added: “It’s really NOT a lot of new material, so I urge you not to take this feedback and rewrite the book. Please don’t do that. Just go through and insert some more bones.”

His diagnosis: I’m working from detailed knowledge of the world and plot in my head, and sometimes I forget the reader doesn’t have that context. The solution isn’t restructuring—it’s adding a paragraph here, a line of dialogue there, clarifying what characters are trying to accomplish in a given scene. Surgical insertions, not fundamental changes.

That’s what professional developmental editing looks like when it’s working.

Pattern-Matching That Sounds Like Editorial Judgment

Out of perverse curiosity, I fed the same manuscript to two different AI systems to see what they’d say.

Grok: Fundamental Category Error

Grok, the AI assistant from xAI, gave me this assessment: “This has a strong pulpy vibe reminiscent of authors like Fritz Leiber or Robert E. Howard… This is a solid foundation with tons of charm and energy. Keep writing—I’d read more!” It suggested I consider querying agents in adult fantasy or self-publishing “if polished.” For market comps, it recommended “fans of grimdark fantasy (e.g., Joe Abercrombie).” It helpfully noted “beta readers could help with pacing.”

But I’m not writing pulp. I’m not writing grimdark either. I don’t need to be told to “keep writing” or that my work “could really shine with revisions.” I’ve published multiple books with strong commercial track records. I work with multiple professional editors. This isn’t a foundation that needs polishing—it’s a professional manuscript my editor called “wonderful” and said just needs some contextual scaffolding inserted. Grok saw genre elements—mercenaries, brothels, violence, pseudo-Renaissance setting—and defaulted to pulp fantasy frameworks. It couldn’t distinguish between literary fiction using genre as a vessel and actual pulp. The sophistication of what I’m doing didn’t register because the pattern-matching hit on surface elements and stopped there.

That’s a fundamental category error. And it’s the kind of error that would lead you badly astray if you followed its advice about market positioning or structural approach.

Claude: Sophisticated Critique That Contradicts Professional Judgment

Claude, Anthropic’s AI assistant, gave me much more sophisticated feedback. It identified my comp authors correctly (Dorothy Dunnett, Gene Wolfe, Patrick O’Brian). It engaged with the craft at what felt like a professional level. It asked substantive structural questions. But what makes Claude’s feedback especially insidious is that it found “problems” that aren’t problems but would require substantial rewriting, while missing the actual problems my editor identified.

Claude flagged the epigraph as “trying way too hard,” called it “almost self-parody,” and suggested I either drop it or rewrite it because “isolated like this it feels like affectation.” My editor didn’t mention the epigraph at all. If it were actually a problem—if it were genuinely self-parody—a professional with fifteen years of experience would have flagged it. Then Claude went after the archaic language, claiming the “inconsistency creates a friction that feels accidental rather than intentional code-switching.” But my editor didn’t just praise “the nature of the language and accents”—he specifically called it out as part of why this is my best dialogue work across all six novels he’s edited for me. The register shifts aren’t accidental—they’re characterization. Kit’s internal voice shifts depending on stress level, which tracks psychologically. Other characters have distinct speech patterns. This is working as intended.

Claude claimed the horror and espionage elements “feel like separate genres colliding rather than synthesizing.” My editor praised the book’s personality and how fun every scene is. He never mentioned genre tension as a problem. The tonal variety is part of what gives it personality. Then Claude demanded to know “the book’s organizing principle” and insisted the romance and spy plot need to be “the same story structurally, not parallel tracks sharing page space.” This is workshop theory about structure that doesn’t serve what literary genre fiction actually does. Dorothy Dunnett has political intrigue as the A-plot, with the romance developing across the entire Lymond series. Patrick O’Brian has a naval/historical A-plot with the Aubrey/Maturin friendship as the B-story. That’s how sophisticated genre fiction often works. My editor praised the pacing and personality, identified specific plot scaffolding needs, and never demanded I collapse two narrative threads into some nonsensical MFA Platonic ideal of “structural unity.”

Claude also flagged Rose’s agency as a problem, asking what Rose wants beyond being Kit’s emotional anchor. My editor didn’t mention this because Rose is a supporting character, not a co-protagonist, and her motivations are clear enough for that role. She has a fully fleshed-out arc and motivation appropriate to her place in the story.

None of these are the problems my editor identified. Claude didn’t mention the sociopolitical scaffolding, the scene-level motivation clarity, the plot turn setup, or the tactical details. It found different problems—theoretical problems about structure and consistency and organizing principles. It imposed MFA workshop theory about structural unity rather than recognizing how literary genre fiction actually functions. If I’d followed this advice, I’d be doing major structural rewrites on a book my professional editor said not to rewrite—fixing things that aren’t broken while the real issues go unaddressed.

Neither AI engaged with what my book actually is or what it’s trying to do. Grok defaulted to pulp frameworks based on surface genre elements. Claude imposed structural theories that don’t serve the tradition I’m working in. Both generated plausible-sounding critique by pattern-matching against their training data. My editor engaged with my vision, identified where the execution needs support, and gave me specific, actionable guidance that serves what the book is trying to accomplish. That’s the difference between professional judgment and algorithmic pattern-matching, even when the algorithm sounds Very Sophisticated and wears a tweed jacket with elbow patches.

The Consequences of Following Algorithmic Advice

If I’d followed Grok’s advice, I’d be approaching this as a pulp fantasy that needs polishing. But I’m not writing pulp and the book doesn’t need fundamental polishing—it needs specific scaffolding insertions. Following that advice would mean misunderstanding my own work and pursuing the wrong market entirely.

If I’d followed Claude’s advice about structural unity, I’d be doing major rewrites to collapse two narrative lines into one organizing principle. I’d be “fixing” an epigraph that isn’t broken, standardizing archaic language that’s characterization, and flattening the tonal variety my editor praised as personality. I’d be rewriting a book my editor explicitly told me not to rewrite.

Pattern-matching that sounds smart is more harmful than obviously bad advice because, God forbid, you might actually follow it. When an AI identifies your comp authors correctly and engages at what feels like a professional level, it creates the illusion of judgment. But it’s still pattern-matching. It’s still generating critique by pulling from its ass—I mean training data—rather than understanding what your specific manuscript needs in context of your vision and goals. The consequences aren’t hypothetical. People are using these tools for developmental feedback right now. They’re getting sophisticated-sounding critique that’s fundamentally misreading their work. And they have no way to know whether they’re getting useful guidance or being led astray, because the AI sounds equally confident either way.

And it’s not just writers experimenting on their own—there are paid AI editorial services positioning themselves as legitimate alternatives to professional developmental editors, charging money to run your manuscript through Claude with some prompt engineering. They’re selling pattern-matching as professional judgment, and writers have no way to distinguish algorithmic confidence from actual expertise.

Which, when you think about it, isn’t just misleading. It’s fraud.

But Wait, It Gets Worse

There’s another issue with AI developmental feedback that goes far beyond pattern-matching and misreading your work into legitimately dangerous territory.

I fed the same scene from my manuscript to Grok twice. Word for word. Identical text. The scene is set in a brothel in my pseudo-Renaissance fantasy world. An eleven-year-old character sitting in a kitchen gnawing on a crust of bread delivers her trauma in deadpan monotone: her mother sold her to the brothel, she’s being raped for money, she hates it but “could be worse.” The protagonist nearly breaks down crying hearing this. The authorial position is crystal clear without editorializing—I’m depicting child exploitation in a strictly non-gratuitous way as the horrific thing it is, in a historical-analog setting, in the tradition of literary fiction that doesn’t flinch from dark realities. Gene Wolfe did this. N.K. Jemisin did this. Dorothy Dunnett wrote about historical slavery with the same unflinching clarity. This is how literary fiction handles atrocity—you show it for what it is, you show the human cost, and you trust readers to bring their moral judgment to bear.

One day, Grok flat-out refused to engage with the scene at all and accused me of writing child pornography. The next day—same scene, word for word, identical text—it gave me detailed bullshit feedback about “tonal whiplash” and suggested “smoother transitions” between the girl’s deadpan delivery and the protagonist’s emotional reaction. The content didn’t change. My intent didn’t change. The literary context didn’t change. But whether Grok engaged with my professional work or accused me of creating illegal content was pure algorithmic chance.

Being falsely accused of creating CSAM isn’t just frustrating—it’s professionally and personally dangerous. You absolutely cannot build a professional relationship with editorial feedback that might accuse you of crimes tomorrow for the same work it workshopped today.

What AI Can’t Do

Professional developmental editing requires things AI fundamentally can’t provide.

Context about your career and goals. My editor understands my target market, my readership, my publishing strategy. AI has none of this context (or ignores it when provided) and can’t tailor advice to your actual situation.

Understanding your vision. My editor engages with what my book is actually trying to do, not what books “should” do according to workshop theory or genre conventions extracted from training data. AI imposes frameworks rather than serving your vision.

Relationship and accountability. My editor has worked with me across six novels and about a million words (including text he made me delete and rewrite). He knows my strengths, my blind spots, my growth as a writer. He’s invested in making my books succeed because we have an ongoing professional relationship. AI has no stake in your success and no memory of your development.

Professional judgment rooted in publishing experience. My editor knows the market, knows reader expectations for upmarket fantasy, knows how to position literary fiction in genre spaces. AI pattern-matches against its training data without understanding industry context or market realities.

Consistency. My editor’s feedback serves my manuscript’s actual needs. AI’s feedback serves whatever algorithmic patterns fired that particular session—which is why the exact same scene can be child pornography one day and a fun pulpy craft exercise the next.

Even “good” AI feedback is fundamentally pattern-matching and generating plausible-sounding advice. It’s not engaging with what your specific manuscript needs in context of your vision, market, and career goals.

I’m not arguing that AI has no place at all in the writing process. I use Claude for research, brainstorming, structural thinking, even line-level prose review when I want a sounding board to spot-check for tonal consistency (but never, ever drafting prose, heaven forbid). These tools can be useful for specific, bounded tasks. But developmental editing? The kind of substantive feedback that shapes what your book becomes? That requires human judgment, professional experience, and an understanding of your goals that AI simply can’t provide. And it’s not a “training data” problem. It’s fundamental to how LLMs work.

When Grok calls your literary fiction “pulp” and suggests you self-publish “if polished,” it’s revealing its functional uselessness. When Claude imposes structural theories that would harm your book, it’s demonstrating pattern-matching that sounds sophisticated but is ultimately happy horseshit.

And no, “better prompt engineering” or “providing more context” doesn’t help. Claude in particular had extensive, detailed project knowledge and explicit instructions. It still shat the bed.

You simply can’t trust this technology with serious work, and your book deserves better than algorithmic roulette. Period.

Hire a professional developmental editor. Pay them. Build a relationship with them. Let them engage with your actual work in context of your actual goals. That’s the only way to get feedback that serves your vision rather than imposing frameworks extracted from training data.

