Everyone’s worried AI will replace authors. So I decided to test it. I fed Claude Sonnet 4.5 nearly 100,000 words of my YA space opera—the complete novel, 5,000 words of a prequel I’d already written, character guides, alien speech patterns, explicit instructions about my protagonist’s psychology. Then I asked it to write the next scene. The result? Competent genre prose that lost my protagonist’s voice entirely. It could analyze what made her voice work and explain it back to me perfectly, yet it defaulted to templates anyway when asked to generate prose. Grok 4.1 failed the same experiment. This isn’t about whether AI will improve. It’s about understanding what AI fundamentally can’t do—and what that means for writers.
The Real Threat to Indie Authors Isn’t AI
Any author who’s actually seen what AI models produce when attempting to write fiction and is still worried about being replaced is worrying about the wrong threat. (Or they’re a spectacularly mediocre author, but I digress…) And before you say “market saturation,” hold that thought. Because it’s moot. The market is already saturated by content…
The Myth of the Prolific Indie Author
Every week, someone on Twitter defends the ultra-prolific indie author pumping out ten novels a year. They invoke “pulp speed” and cite million-word-per-year math. They insist it’s possible if you just work hard enough. They’re selling you productivity courses. Here’s the problem: they’re confusing typing with publishing. I write fast. I’ve banged out 134,000-word first drafts in six weeks. My peak year was 500,000+ words. And I still can’t hit seven published novels annually. Not even close. The bottleneck isn’t typing speed. It’s revision, editing, proofreading—everything that turns a first draft into a finished book. When you account for that work, the math collapses. Which means when someone consistently publishes 7+ novels per year, I’m calling it: they’re using ghostwriters.
The Evil Isn’t Coming; It’s Already Being Retweeted
Hannah Arendt went to Jerusalem in 1961 expecting to report on a monster. She found a middle manager instead—a bureaucrat who spoke in clichés, followed orders, and never thought about where the trains were going. Evil wasn’t demonic, she argued. It was banal. Ordinary. Thoughtless. Now, more than sixty years later, the banality of evil has a retweet button. When Donald Trump accused Haitian refugees of eating pets during a presidential debate, thirty-three bomb threats followed. When Charlie Kirk was assassinated, conservative circles erupted in eliminationist rhetoric against half the country. And millions of ordinary people hit “share” without thinking about what they were amplifying or where this pattern historically leads. This is Arendt’s framework applied to America in 2025 in real time, while we still have a chance to stop the slide toward where history warns we’re headed. Reading time: 28 minutes.
An Author of Dubious Literary Merit
I used to call myself an “author of dubious literary merit”—half joke, half truth. I write stories to follow characters through impossible situations and see what choices they’ll make and how they’ll live with them (and hopefully entertain readers in the process). I never set out to explore specific themes or craft philosophical arguments. Then a reviewer recently said my novel “Born in Battle” was “one of the top 5 books I’ve read this year” out of over a hundred novels including “War and Peace,” “Blood Meridian,” and “Crime and Punishment.” She described it as “the only book that, a week later, still makes me get up in the middle of the night with my thoughts about what academics would call enduring themes of human existence.” That made me stop and take a hard look at what I’ve actually been writing over the last few years, and why. Turns out I’ve been in conversation with authors like Lloyd Alexander, Ursula K. Le Guin, and N.K. Jemisin all along.
Eucatastrophe Isn’t Moral Order or: Why Reformed Readers Misread Both Tolkien and Martin
After writing my essay arguing whether “Game of Thrones” is nihilistic or hard-won humanism, I realized the real debate isn’t about Martin at all—it’s about Tolkien. Reformed apologetics has so thoroughly appropriated “The Lord of the Rings,” flattening its Catholic sacramental theology into moral triumphalism, that even Martin’s sophisticated critiques argue against the appropriation rather than the actual author. Tolkien wrote about grace redeeming failure despite permanent wounds. Martin inherited that same Catholic framework for analyzing tragic dilemmas—situations where both choices are objectively wrong—but stripped out the eucatastrophe. Reformed readers can’t see what either author does because they need moral order to reassert itself. The irony? Martin thinks he’s correcting Tolkien’s naïve triumphalism when Tolkien never wrote that. Both work from Catholic tragic moral theology. One just doesn’t believe in Grace anymore—or so he tells himself.
Game of Thrones Isn’t Nihilism—It’s Hard-Won Humanism
The critics who call Game of Thrones nihilistic have never been at the lever when the trolley’s barreling toward both tracks. Everyone “knows” honor matters—until honor costs you your head. Everyone condemns oathbreakers—until you’re sworn to both king and realm and your king plans genocide. Everyone thinks they’d never make Daenerys’s mistakes—until they’re holding power with advisors dead, ideals crashing against reality, and no good options left. Martin doesn’t write nihilism. He writes the gap between moral philosophy in the classroom and the trolley problem in real life. Between theoretical principles and the moment you’re actually forced to choose—knowing both tracks lead to blood and neither choice will let you sleep. The people calling that “unrealistic”? They’ve had the luxury of never pulling the lever.
AI Isn’t the Problem: Fraudulent Authorship Is
The indie publishing world accepts undisclosed ghostwriting—where someone else writes the prose and the name on the cover takes the credit—but treats AI-generated book covers as a betrayal of readers’ trust. This is completely ass-backwards. The line that matters to me is simple: did the credited author actually write the story? I don’t care how the cover was made. And why should I? How did we get to a point where fraudulent authorship practices are dismissed as “just business” but marketing materials created with AI assistance are some kind of moral crisis?
Don’t Lecture Me About AI Ethics While Typing on Blood Cobalt
A Twitter user called me unethical for defending AI in the creation of book covers. “It is certainly unethical to use AI in the creation process of anything intended to be sold for profit,” they declared—while typing on a device built with components sourced through child slave labor and weaponized rape. Six-year-olds work 12-hour days in DRC cobalt mines whose profits fund armed militias. Indigenous communities lose their water to lithium extraction. Rare earth mining poisons entire provinces. Every electronic device you touch on a daily basis requires human suffering on a scale you probably can’t comprehend. But an indie author using AI for marketing? That’s the great moral crisis facing us today. So let’s talk about principles—and why critics can’t answer basic questions about their own.
Far More Authors Than You Think Are Using AI—Guess How Many Won’t Admit It?
Authors are quietly using AI for covers, marketing, research, plotting, and more, while anti-AI activists rage impotently on Twitter and threaten boycotts on BookTok that never materialize. When a Midjourney-generated cover won a fantasy reader popularity contest, 2,500 scrutinizing fans couldn’t spot it. Only forensic metadata analysis revealed the truth. The backlash came after disclosure, not before. Authors who admit AI use fear review-bombing and boycott threats. Authors who stay silent? They face nothing and collect their royalties because readers can’t tell and frankly DGAF. At least 45% of all authors now use AI for their work in some fashion—and you won’t believe how many of them don’t admit it.