Publishers are using AI to screen manuscripts before human eyes ever see them. The result? Your Renaissance fantasy gets flagged for “misgendering” because your heroine is disguised as a boy. Your PTSD story gets rejected for “gratuitous violence.” Your satire about racism gets auto-rejected as racist. AI can identify patterns—trafficking, violence, “problematic” content—but it can’t make judgments. It can’t distinguish between depicting evil and endorsing it, between satire and bigotry, between complex characters and harmful stereotypes. The books doing the most important work—exploring trauma, condemning systemic evil, trusting readers to think—are dying in the slush pile, rejected by algorithms that mistake thoughtful storytelling for risk.