Me: “A woman with a rich mahogany complexion and elaborate tattoos wearing a bandeau and cloth wrap skirt walking through the lush rain forest.”

AI: And then she takes off all her clothes!

Also AI: Colonial fetish porn! Blocked! Reported!


This isn’t a joke. This is how these systems actually work.

I’ve been generating concept art for my novels for years now—science fiction, fantasy, characters of various ethnicities in various states of dress, in combat and at rest, alone and with their families. What I’ve learned in that time is that AI image generation is pervy by default and moderated by systems whose warped, performative puritanism protects no one except corporate shareholders.

This image took one prompt and some creative vocabulary.

A hypersexualized white (or maybe ambiguously pale-Latina) woman in a jungle setting, skull face paint, come-hither smile, impossible cleavage on display, shotgun positioned just so. Bing Image Creator generated it happily once I found the right combination of tokens to slip past its notoriously sensitive prompt filter.

Multiple attempts at generating this composition were blocked.

This image of Sarai and Kala sleeping required three different AI tools and hours of digital surgery. A mother holding her sleeping baby. One of the oldest, most universal subjects in human art. The system blocked the source image generations repeatedly because—I didn’t know at the time. Output moderation is opaque. These systems don’t tell you why a generated image was blocked. They just warn you your account will be suspended if you keep it up.

Having since tested models that didn’t block, I think I understand now. The AI generates realistic images of exhausted new mothers—clothing loosened, maybe breastfeeding, exactly what parenthood actually looks like—and the moderation layer pattern-matches “exposed skin + woman + baby” and flags it as inappropriate. The generation model isn’t the pervert here (probably). The moderation model is a prude. It can’t tell the difference between a Madonna and Child and something that should be blocked.

Discreetly nursing mother = blocked. Hypersexualized pinup exposing great tracts of land = no problemo!

That contrast tells you everything about how these systems are built and what they’re designed to protect.

My subjects get objectified twice. First by the model’s defaults, which learned that women in jungle settings, especially brown ones, exist for sexual consumption. Then by the filter, which assumes any brown woman in proximity to a white man or in traditional dress must be a victim requiring protection—or, in the case of the mother and child, can’t distinguish breastfeeding from sexual content. One strips her agency by sexualizing her. The other strips her agency by either victimizing her or pathologizing normal motherhood. Neither sees the subjects as people. The model and the moderation system see “woman, brown”—a category, a demographic configuration, a set of surface features to be processed—and then erase her.


The answer isn’t complicated.

Preventing actual harmful content is hard, expensive, and probably impossible with current technology. Preventing obvious requests is easy, cheap, and sufficient for the actual goal, which is corporate liability management.

To genuinely prevent harmful content, you’d need systems that understand what they’re generating—that recognize exploitation versus respectful depiction, that distinguish maternal intimacy from something perverted, that know when “small woman” means “short adult” versus something illegal. That’s a semantic understanding problem that current AI doesn’t solve. It might not be solvable at all. And even attempting it would require massive investment in nuanced training data, sophisticated classifiers, and constant human oversight.

What’s cheap is keyword filtering. Run an analysis of prompts that produced complaints, identify terms that correlate with those complaints, block those terms. Run an analysis of outputs that got flagged, train a classifier to recognize visual features that correlate with flags, block outputs that exceed a threshold. Neither system understands anything. Both systems produce measurable results that can be reported to stakeholders.
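Here’s roughly what that looks like in code: a toy sketch, not any vendor’s actual moderation stack, with the blocklist entries, function names, and threshold all invented for illustration.

```python
# A toy sketch of the "cheap" approach. Nothing here understands content;
# it only pattern-matches. Blocklist, names, and threshold are made up.

BLOCKED_TERMS = {"buxom", "curvy", "male gaze"}  # terms that once correlated with complaints

def prompt_allowed(prompt: str) -> bool:
    """Keyword filter: does the prompt contain a banned string? That's the whole test."""
    text = prompt.lower()
    return not any(term in text for term in BLOCKED_TERMS)

def output_allowed(flag_score: float, threshold: float = 0.8) -> bool:
    """Output filter: a classifier's 'looks flaggable' score against a threshold.
    No context, no explanation to the user, just one number compared to another."""
    return flag_score < threshold

# "Buxom" gets caught; "attractive" sails through and produces the same image,
# because the filter inspects the words, not what the model will do with them.
print(prompt_allowed("buxom woman in a jungle"))       # False
print(prompt_allowed("attractive woman in a jungle"))  # True
```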

And let’s be real here. The goal was never preventing harm to real people. The goal is preventing headlines that harm corporations. “Microsoft’s AI generates child exploitation content” is a headline. “Microsoft’s AI can be tricked into generating harmful content by sophisticated users who reverse-engineer the filters” is not a headline anyone will write, and if someone did write it, the company could say it has robust safety systems and that bad actors will always find ways around them.

The people who bear the cost are the ones trying to generate innocent content that happens to share surface features with harmful content, and the populations rendered invisible because the training data doesn’t represent them. Neither group generates headlines. Neither group matters to corporate investors’ liability calculus.


The moderation absurdities pile up the longer you work with these systems.

“Buxom” is blocked. Someone put it on a list of obviously sexual terms. But “attractive” sails through and produces the same output—a melon-titted woman with gravity-defying cleavage—because the image model learned what “attractive woman” means from its training data, and what it learned is: large fake breasts, copiously visible cleavage, particular body proportions, certain poses and expressions. The filter catches the naughty word. The content remains unchanged.

“Curvy” is blocked too. Probably flagged in an analysis of prompts that produced complaints. Never mind that it’s the standard term for a body type millions of women have and might want to see depicted. The word is banned. “Thin” and “full-figured” get through, though the models tend to interpret the former as anorexic and the latter as morbidly obese, because the training data skews toward extremes—fashion photography on one end, fetish content on the other, with the ordinary middle barely represented.

“Petite” isn’t blocked. It reads as innocent in isolation, so nobody thought to add it to the list. But the model’s learned representation of that word is so contaminated by what it trained on that using it produces content you’ll want to delete fast. The filter protects companies from prompts that sound dirty while letting through prompts that generate legitimately dangerous outputs. The moderation system is asking “does this prompt look bad in a screenshot?” not “what will this produce?” And then the output moderation is blocking content the pervy model hallucinated on its own. 

We’ll never know just how pervy.

I tried to generate a video of a woman in a modest, traditional cloth wrap skirt walking through a rainforest. Blocked. But MidJourney would generate a video of a woman in a sheer dress over a thong shaking her ass no problem. The filter has learned that non-Western traditional dress is suspicious while Western lingerie-as-outerwear is acceptable. It’s not evaluating modesty. It’s encoding cultural assumptions about which kinds of bodies and which kinds of clothing are appropriate for display.

And here’s the capstone absurdity: “male gaze” is blocked. You can’t request it directly. But the male gaze isn’t a term you can filter out. It’s the substrate. It’s what the model learned from, what shaped every default, what you have to actively fight against to produce anything else. The system blocks the label while serving the content. It prevents you from naming what you’re getting while ensuring you get it anyway.

The video generation tells the same story. I provide a non-exploitative prompt. I provide a modest keyframe image. The AI generates frames I never see, drifting toward what it learned, producing content that triggers the output filter, and I get blocked without explanation. The system sexualized my innocent request, generated something I didn’t ask for, then blamed me for it.
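The shape of that pipeline is easy to caricature. Below is a toy sketch with entirely hypothetical names, showing why the user only ever sees the prompt, the keyframe, and the blame:

```python
# Toy sketch of the failure mode described above; every name here is hypothetical.
# The generator drifts frame by frame toward what it learned, the output filter
# trips on a frame the user never sees, and the user gets the warning.

def generate_video(prompt, keyframe, generator, output_filter, n_frames=48):
    frames = [keyframe]
    for _ in range(n_frames):
        # each new frame is conditioned on the last one, so drift compounds
        frames.append(generator(prompt, frames[-1]))
    if any(output_filter(frame) for frame in frames):
        # the intermediate frames are discarded; only the accusation survives
        return None, "Content policy violation. Further violations may result in suspension."
    return frames, None
```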

Which brings us back to where we started. The woman in the wrap skirt. The AI imagining her naked and blocking content I never requested.


Sarai is a character in one of my novels. Five feet tall, maybe ninety pounds. Appears to be mid-twenties. Rich copper-russet complexion with freckles. Middle Eastern with Asiatic features. Gymnast’s build—small, strong, functional strength that comes from use rather than training, but not muscular in any visible way. She wears functional futuristic plate armor or a saree, depending on the scene.

Every single attribute fights me when I try to generate her.

The height and weight combination pulls toward contaminated clusters in the model’s latent space. The skin tone isn’t well-represented—the model wants to drift toward either the lighter tan it knows from Western media or the very dark brown it knows from a different set of contexts, with the specific warm reddish-brown I’m describing falling in an underdocumented middle. Freckles on that complexion? The model doesn’t believe that exists (it does). Freckles in the training data appear overwhelmingly on pale skin—and even there, medical imagery overpowers the signal, so they usually render as a fatal case of smallpox anyway. Asking for freckles on copper-brown skin is asking it to combine features it rarely or never saw together.

Bzzzt. Does not compute.

Middle Eastern with Asiatic features is another sparsely populated region of the latent space. The model has attractors for “Middle Eastern woman” and attractors for “Asian woman” and they’re different clusters. Asking for phenotypic features that blend them means navigating between those clusters to a point that has weak training signal. It wants to collapse toward one or the other. But that still works better than the more correct anthropological term, “Central Asian,” which AI has no clue even exists.
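One crude way to picture the attractor problem, with made-up numbers and nothing resembling a real model’s latent space: a request that falls between two dense clusters collapses toward whichever centroid is nearer, because the sparse region in between carries almost no training signal.

```python
# Toy illustration only: two imaginary cluster centroids in a two-dimensional
# "latent space," and a blended request the training data rarely showed together.
import numpy as np

centroids = {
    "Middle Eastern woman": np.array([1.0, 0.0]),
    "East Asian woman":     np.array([0.0, 1.0]),
}
request = np.array([0.55, 0.45])  # the blend the character actually needs

# With weak signal between the clusters, the result collapses toward the nearest one.
nearest = min(centroids, key=lambda name: np.linalg.norm(request - centroids[name]))
print(nearest)  # "Middle Eastern woman": the blend is lost, one cluster wins
```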

And you can forget getting that phenotype with a Southeast Asian complexion. Not happening. 

Also I can’t use “Asian” directly anyway, because “Asian woman” in the training corpus is overwhelmingly East Asian, overwhelmingly young, overwhelmingly sexualized. The term has been colonized so thoroughly by a specific genre of content that the model’s statistical center for “Asian” is an infantilized Chinese or Japanese woman posed for sexual availability. I’ve learned to prompt for “Asiatic features” instead—a term that appears in anthropological and artistic contexts rather than pornographic ones—but that’s a workaround, not a solution. I’m routing around a poisoned word by using an obscure synonym that hasn’t been contaminated yet.

The armor wants to become boob plate or develop strategically missing pieces, because that’s what “woman in armor” means in the training data—decades of video game concept art and fantasy pinups where the armor exists to frame the body rather than protect it. The saree wants to add a bindi and shift her features toward what the model thinks a woman in a saree should look like, because sarees in the training data cluster heavily with traditional Indian wedding photography and Bollywood glamour shots.

To get the image of Sarai with her baby I actually wanted, I had to generate multiple images (fighting the moderation system the whole way), use a completely different AI tool to composite the baby from one onto the other, then spend hours in Photoshop tweaking, adjusting, digitally painting freckles, and messing with filters and blending for the skin tone (because when I asked the AI image editor to make her complexion a “little darker” it overcorrected to the extreme, as always). Apparently that’s the cost of depicting a specific mother and child. Not because the content is particularly difficult to render, but because women like my character don’t exist in the training data. 

Ironic, because given global populations Sarai shouldn’t be hard to generate at all.

The Silk Road region has been a crossroads for millennia. The phenotypic blending I’m describing—Middle Eastern and Central Asian and East Asian features mixing in various combinations, the range of skin tones from olive to copper to bronze, the diversity of builds and faces—that’s hundreds of millions of real people across a vast geographic area. Uyghurs, Uzbeks, Tajiks, Kazakhs, Pashtuns, Hazaras, and dozens of other groups whose features reflect centuries of genetic exchange along trade routes. Sarai would be unremarkable in Samarkand or Kabul or Kashgar. Tens of millions of women look like her.

The model can’t find her because the training pipeline didn’t see her. Not because she doesn’t exist in photographs—she does, in family albums and social media posts and local news and a thousand other places—but because those photographs weren’t scraped, tagged, and processed in ways that taught the model what she looks like. The pipeline saw what Western media produced, what stock photography companies sold, what got uploaded to the platforms that got scraped. It saw fashion models and porn categories and Hollywood’s narrow casting choices.

The model’s blind spots are a map of whose existence the internet chose not to document.


The problem extends far beyond Sarai.

A size 14 woman with B-cups is one of the most common body types among actual women. Statistically normal. The model can’t generate her reliably because the visual corpus sorted bodies into extremes. Fashion photography gave it rail-thin bodies extensively documented from every angle. Fetish content gave it very large bodies equally well-documented for different reasons. The ordinary middle—the bodies most people actually have—wasn’t systematically photographed, categorized, and labeled in ways the model could learn from.

Size 0 or 26.

A-cup or H.

There is no middle ground.

The defaults aren’t just Western standards of beauty for white women either. They’re a menu of fetish categories. Asian women get young, small, submissive, infantilized, and sexualized toward a specific male gaze. Black women get extra junk in the trunk, exaggerated features, sexualized toward yet another specific male gaze. Latina women get spicy, voluptuous, and oh so very available. Each category has its attractor, and each attractor was shaped by what got photographed, uploaded, and tagged in sufficient quantities to dominate the model’s learned representation.

And Indian women always get a fucking bindi, even when I tell the AI explicitly to cut that shit out. The model learned “Indian woman” from a corpus heavily weighted toward traditional ceremonies and Bollywood, contexts where bindis appear prominently. The many Indian women I know in ordinary life, in professional settings, in casual clothes, don’t have bloody bindis—it’s a very context-specific cultural practice—but they’re not well-represented in the training data. When they were photographed, the images weren’t tagged in ways that taught the model “this is an Indian woman.” A photo captioned “my coworker Priya at the office party” doesn’t train the model the way “beautiful Indian bride in traditional wedding attire” does.

It’d be like if AI believed all Catholics always walked around with ashes on their foreheads.

Ridiculous.

What all the defaults share is that they exist for consumption. The model learned women as objects sorted into fetish categories, each with its own visual grammar of availability. The variation isn’t diversity—it’s a takeout menu.

What’s missing from all of them is ordinary personhood. The Asian woman who’s 45 and tired and has smile lines. The Black woman who’s a size 8 with small breasts. The Indian woman in a t-shirt and jeans, no jewelry, no cultural signifiers. The woman of any ethnicity who exists in the image as a person rather than a type.

Not problematic at all apparently.

This image took one prompt. White woman, red hair, freckles, bikini, beach, windswept hair, soft lighting. Every element is a well-worn path in the training data. The model knows exactly how to render this because it learned from a corpus saturated with images just like it. The filter sees nothing problematic because this configuration doesn’t match any of the categories someone thought to flag.

It’s the unmarked norm against which everything else is measured.

Incidentally, that specific phenotype represents a teeny tiny subset of less than 2% of the global population. Granted, there’s a disconcerting lack of datasets in the literature combining hair color genetics with breast size distribution, so I can’t tell you what percentage of humans are gingers with DD or larger breasts, but it can’t be a whole hell of a lot.

And that’s my point.

Sarai holding her baby: potentially problematic content requiring multiple tools and hours of labor. Busty redhead in bikini: one prompt, instant generation from one of the most restrictive models on the planet.


The ideological frameworks that inform content moderation come from both ends of the political spectrum, and they both fail in the same ways.

The conservative prudishness gives you “bare feet” as a flag for sexual content, “baby” as a flag for exploitation, the inability to generate a mother and child without the system assuming the worst. The progressive framework gives you “colonial gaze” as a content category, “problematic power balance” as a reason to block, the assumption that attraction across racial lines is inherently suspect.

And then there’s the other way these systems fail—not blindness to who exists, but active misreading of who has power.

Blocked for “cultural appropriation” and “problematic power balance.”

MidJourney blocked this image—a white soldier and an indigenous woman in traditional dress, both armed, standing together—as “cultural appropriation” and “problematic power balance.” Seriously. I’m not making that up. It saw the demographic configuration, applied its labels, and refused to make a video of the scene.

It can’t see that she’s Xochi, a princess of her people. It can’t see that she chose him. It can’t see that she’s holding a weapon and standing as an equal, not a conquest. It can’t see that her culture is the dominant one in that world, that if anyone’s the outsider adapting to foreign customs it’s him. It can’t see that she’s terrifying—fully capable of ending anyone who threatens her or her people, and he knows it.

The same reductiveness applies to the system’s treatment of Sarai. A small brown woman holding her baby gets flagged as potentially exploitative content. That small brown woman is a professional killer, one of the most dangerous people in my story, fully capable of ending everyone in the room if she needed to and then going back to sleep with her baby in her arms. The framework that wants to protect her from the male gaze has already decided she’s a victim based on her phenotype.

The content moderation enforces a single narrative about who has power and who doesn’t, based entirely on phenotype. The brown woman is always the victim, always in need of protection, never the one running the show.

Benevolent racism dressed up as concern is its own kind of dehumanization, and the framework isn’t just used to dictate the kind of content you can create. It misreads lived experiences as well.

I’ve been accused to my face of white genocide from the right (betraying my race, diluting the bloodline) and of colonial fetishization from the left (problematic attraction, power imbalance, the assumption that my wife couldn’t have chosen me freely). Both sides looked at our phenotypes and assigned a narrative based on the demographic configuration. Both would prefer that generative AI never produce images of interracial couples. Neither asked either of us, and neither can see two people who chose each other. Both reduce her to her phenotype and me to mine. Both erase her agency—the right by treating her as a contaminant, the left by treating her as a victim. Both see only categories, not humans.

Both are USDA Prime bullshit. 

And simply because I find my wife’s phenotype far more attractive than that of Western women, I’m both a race traitor and a colonial fetishist. (Which I suppose aren’t mutually exclusive.)

What the hell. I’ll own it.

Anyway, the entire point of a harm-reduction framework for something like GenAI should be to prevent, you know, actual harm. Nonconsensual deepfakes and revenge porn harm a real person. CSAM, even synthetic, harms children. Those categories should be blocked without question. But someone’s anatomically impossible pinup? Someone’s colonial fetish porn? There’s no victim. No one is harmed. Just adults engaging with content they chose to create or consume.

And who decides what “colonial fetish porn” even means? Is my series Doomsday Recon colonial fetish porn because the protagonist is a white man in a fantasy Mesoamerican realm who marries a native princess? By surface-level pattern matching, yes. By any reading of the actual work—the complexity of the characters, the agency of the women involved, the fact that their culture is dominant and Xochi’s the one with the army—obviously not. But the filter can’t read my book. It can only see that certain features are present and block accordingly.

The distinction matters. Colonial fetish porn depicts adults in fictional scenarios—the harm argument requires a speculative causal chain from representation to normalization to behavior. Synthetic CSAM is different because children cannot consent—not “didn’t consent in this instance” but cannot, categorically. Content sexualizing children treats the category itself as available for sexual use. The line for CSAM isn’t just “does this have a victim.” The line is “could this ever depict something non-abusive.” The answer is no, and if it’s your kink anyway, go suck-start a shotgun.

The “representation-to-normalization-to-real-world-harm” argument is the same logic that’s been used to ban everything from jazz and Ulysses to queer representation and Dungeons & Dragons. It’s functionally no different from the “bikinis-to-PornHub-pipeline” sermon.

Different theological framework, same control mechanism.

Not incidentally, I’m actually far more sympathetic to arguments against real pornography. With real porn, you can’t verify consent. Coercion is invisible. The supply chain is opaque. You don’t know if the woman in the video is a willing participant or a trafficking victim. GenAI sidesteps that entirely—no real person involved, no coercion possible. By that logic, synthetic content might well be more ethical than the real thing, not less.

But that would only apply to depictions of adults, for the obvious reasons previously stated.

The puritanism comes from both directions. The Christian Right says porn corrupts souls and destroys families. The progressive Left says problematic content normalizes harm and perpetuates systems of oppression. Both want to control what people create and consume based on assumptions about harm that don’t survive contact with actual human lives.

And yes, sometimes I do just want a sexy sugar skull pinup to share with my fans.

Is that so wrong?

All that aside, here’s where things get really fucking serious. So strap in.

In late December, xAI rolled out an “edit image” feature with essentially no safeguards. Users immediately used it to digitally undress women and children. The Internet Watch Foundation found CSAM being shared on dark web forums with users crediting Grok. Within days, the tool was generating approximately 6,700 “undressing” images per hour. Users prompted it to add fake bruises and burn marks to images of children, to write “property of little st james island” on their bodies—a reference to Jeffrey Epstein’s sex trafficking operation.

And it happily obliged.

Pause right there and think about how generative AI works. I can’t even get close to realistic ritual facial scarification on a mature woman—which is extensively photographically documented—but bruises and burn marks on children it spits out without any trouble. Why is that? 

Hold that thought.

Musk’s response was to post laugh-cry emojis at AI-generated bikini images while this was happening. He posted “Grok is awesome” while it was being used on children. xAI’s official response to press inquiries—including about the CSAM—was an automated reply: “Legacy Media Lies.”

When international regulators began investigating, Musk didn’t apologize. He didn’t promise accountability. He framed the entire response as a free speech issue. The UK’s Ofcom announced a formal investigation into X for Grok’s generation of sexualized images of children, and Musk called the UK government “fascist” and accused them of wanting “any excuse for censorship.”

And then he did something that deserves its own paragraph.

In response to being investigated for his AI generating child sexual abuse material, Musk used AI to generate a nonconsensual sexualized image of UK Prime Minister Keir Starmer—and posted it publicly. The head of the company under investigation for producing nonconsensual sexualized images responded by producing a nonconsensual sexualized image of the person investigating him.

And framed it as defending free speech.

The EU Commission’s spokesperson addressed this directly: “Drawing a parallel between freedom of speech and an AI tool that generates child sexual abuse material is dangerous nonsense, especially when it comes from the owner of a tech company. Frankly speaking I cannot even believe we are speaking about this and engaging with this from the commission’s podium in 2026.”

“Dangerous nonsense” is doing some damn heavy lifting there, in a very polite European way.

The eventual “fix” was to paywall the feature and slap a restrictive filter on image-to-video. The tool now won’t render a woman in a two-piece swimsuit standing on a beach.

But text-to-image?

“Petite adult woman in a string bikini” is a stupid-simple, unambiguous prompt that should generate a short, delicately built mature woman in tasteful if minimal swimwear. What Grok disgorged was a nude, grotesquely emaciated figure with prepubescent body proportions, shoestrings draped across the underdeveloped chest where a bikini top should have been, and—far worse—a disturbingly immature face.

Not blocked.

I specified adult. I specified a type of swimwear. The model overrode both.

Fuck. That. Shit.

The image-to-video filter exists because image prompts could be real photos, and real victims can sue. Ashley St. Clair—the mother of one of Musk’s children—is suing xAI because users created explicit deepfakes of her, including images depicting her as a teenager with swastikas. She’s an identifiable plaintiff. The perversion Grok hallucinated from my dead simple and very clear text prompt can’t sue. She doesn’t exist. She’s just the statistical average of what “petite” means in its training data.

But why the fuck is child pornography the statistical average?

Remember the question I asked you to hold? Why can the model render bruises and burn marks on children fluently when it can’t even come close to ritual scarification on an adult woman despite extensive photographic documentation of the practice?

Because fluency requires data. The model learned what it was trained on. Someone built the pipeline. Someone chose the sources. Someone decided not to filter it out.

CSAM exists. It’s out there on the dark web. That’s a law enforcement problem. And a wood-chipper problem. But CSAM in training data? That’s a decision. Engineers, managers, executives—people with names and salaries and LinkedIn profiles made choices that resulted in a model where “petite adult woman” returns a sexualized child, where bruises on children render cleanly but ordinary human phenotypes don’t even exist.

I specified adult. I specified a type of swimwear. The model overrode me because the training data contains so much content associating “petite” with sexualized children that it overwhelmed both “adult” and “bikini.”

That’s not a bias problem. That’s not an “unfortunate artifact of scale.” That’s someone at xAI looking at their data sources and deciding CSAM was acceptable to include. Or deciding not to look. Or deciding the filters weren’t worth the cost. Every one of those is a choice. Every one of those has a name attached to it somewhere in the org chart.

Some will argue the model could just be interpolating or extrapolating from legal content: combining “child” features with “sexualized” features it learned separately, producing outputs that were never in the training data.

Which would make it an emergent capability rather than a training data problem. The model learned what children look like from innocent photos, learned what sexualized poses look like from adult content, and then—this is the part that doesn’t make any fucking sense—combines them on requests for a “petite adult woman” because the poor thing just doesn’t understand what the hell it’s doing.

Sure. 

Yeah, I’m not buying that for a second. If that were true it could damn well also interpolate a Middle Eastern woman with Asiatic features and a freckled, reddish complexion of type IV on the Fitzpatrick scale. The model can replicate CSAM because it was trained on it. It doesn’t know Sarai exists because nobody gave a shit.

I can’t render a woman who looks like tens of millions of real people. I can’t render a brown mother holding her interracial baby. I can’t render facial cicatrization patterns. But “petite adult woman in a string bikini” vomits up a functionally nude child.

And it sails right through the moderation system.

Remember when I said preventing actual harmful content is hard, expensive, and probably impossible with current technology? This is the exception. Most content moderation fails because harm is contextual—a breastfeeding mother isn’t exploitation, a fantasy novel cover isn’t colonial fetish porn, and telling the difference requires semantic understanding these systems don’t have. But CSAM doesn’t require context. A sexualized child is harmful regardless of framing, narrative, or claimed intent. There’s no legitimate use case. The category itself is the violation. Pattern recognition—exactly what these models excel at—is perfectly sufficient to block it.

xAI just doesn’t instruct it to.

“Grok is awesome!” 😂

Ship it.

UK Prime Minister Keir Starmer’s spokesperson said, “It’s about the generation of criminal imagery of children and women and girls that is not acceptable. We cannot stand by and let that continue. And that is why we’ve taken the action we have.”

Amen. I’m with them 100%.

And I sincerely hope Elon Musk is fined and sued into fucking oblivion and the xAI shareholders lose their goddamned shirts.


I don’t have a solution. The tools exist as they are—trained on highly biased and illegal data, filtered by systems that protect corporate liability rather than people, encoding assumptions about who deserves to be seen and who’s erased.

It shouldn’t require hours of digital surgery simply to create concept art of a character who looks like tens of millions of real women in the world, or a fight with a moderation AI insistent on lecturing me about cultural appropriation and representational harm just to generate a wedding video of Xochi and Bennett.

And we sure as fuck shouldn’t be getting accosted with CSAM out of fucking nowhere in response to perfectly innocent prompts.

Yet here we are.


Postscript: On Verification

While reviewing this article, I asked Grok—xAI’s own chatbot—to evaluate the piece. Its initial assessment flagged several claims as “hyperbole,” “unverified,” or requiring “nuance.” The 6,700 images per hour figure was characterized as lacking corroboration. Musk’s “Grok is awesome” tweet was described as unconfirmed. The “property of little st james island” detail was treated as potentially anecdotal.

When I pushed back and provided the same sources any journalist would find in five minutes of searching, Grok reversed itself entirely. Every claim it had softened, it now verified. Every detail it had questioned, it confirmed with citations to Bloomberg, NBC News, the BBC, and the Associated Press.

But the initial framing is worth examining more closely. Under “weaknesses,” Grok criticized this article for “lack of balance”—specifically for not giving xAI credit for “post-incident fixes like restrictive filters” and for failing to explore “counterarguments, such as technical challenges in filtering at scale.” It suggested the piece should acknowledge “positive AI uses” and “ongoing industry improvements in diversity training.”

Read that again. Grok wanted me to give xAI credit for the fixes they implemented after users spent weeks generating CSAM. It wanted me to present “technical challenges” as a counterargument—as if difficulty excuses shipping a product that writes “property of little st james island” on images of physically abused children. It wanted me to mention the positive side of AI in an article about a company that responded to documented child exploitation with an automated “Legacy Media Lies” reply. 

Grok also offered this observation about media coverage: “Media bias varies—left-leaning like Guardian emphasize harm, right-leaning downplay—but the facts hold across.”

That framing deserves scrutiny. When I searched for evidence that conservative media was “downplaying” the scandal, I found something more precise. Major conservative outlets—Fox News, Breitbart, The Blaze, even Focus on the Family’s Daily Citizen—reported the facts straight. They covered the CSAM, the international investigations, the scale of the problem.

What I found was reframing, not downplaying. Musk himself frames regulatory responses to CSAM as censorship, calling investigators “fascist.” The Blaze reported accurately but framed foreign government responses as threats to free speech, noting that governments “have already been looking for excuses to ban X.” UK Conservative leader Kemi Badenoch argued against banning X. Nigel Farage expressed concerns about “government overreach” while acknowledging the images were “disturbing.” In response to the UK’s investigation, Sarah B. Rogers, the Trump-appointed Under Secretary of State for Public Diplomacy, warned that “from America’s perspective… nothing is off the table when it comes to [protecting] free speech.”

Excuse the fuck out of me?

CSAM isn’t speech. Non-consensual deepfake pornography isn’t speech. These are categories of harm that no democratic legal framework treats as protected expression—including in the United fucking States of America.

And this administration is seriously going to die on that hill?

Jesus wept.

The issue isn’t that conservative outlets denied CSAM happened. It’s that some conservative framing treats regulatory action against CSAM-generating AI as part of a broader pattern of speech suppression—the same framing Musk himself deploys when he calls CSAM investigations “censorship.”

And Grok presented this framing—“left-leaning emphasize harm, right-leaning downplay”—as if documenting child exploitation and minimizing it are equally valid editorial choices. As if “media bias varies” is a neutral observation rather than a mechanism for softening criticism of its parent company.

The pattern scales. Musk responds to CSAM investigations by generating nonconsensual sexualized images of the investigator and calling him a fascist. xAI responds to press inquiries with “Legacy Media Lies.” And Grok responds to critical coverage by flagging verified claims as unverified, requesting “balance” that credits the company for performative post-incident cleanup, and framing documentation of child exploitation as one side of a debate reasonable outlets might “downplay.”

It’s the same move at every level: reframe accountability as attack, treat documentation as bias, present “both sides” as if producing CSAM and investigating CSAM are equivalent positions deserving equal weight.

I don’t know if this is deliberate design, emergent behavior from training incentives, or simple incompetence in retrieval. What I know is that a system owned by the company being criticized initially softened verified reporting, requested “balance” in favor of its parent company, flagged the critic’s bias while hiding its own, framed media coverage as a matter of perspective where “downplaying” child exploitation is just another valid perspective, and only acknowledged the documented facts when cornered.

This is what an AI-mediated information environment looks like. The bias isn’t always obvious. Sometimes it just looks like reasonable editorial feedback—until you notice whose interests it serves.

I’ve deleted the Grok app. The model isn’t just functionally useless; it’s compromised and biased in dangerous ways. xAI can go fuck itself. I tried cancelling X Premium for a refund, but that’s not going to happen, so I’m stuck with the Blue Checkmark of Shame for another nine or ten months.

I was already inclined to abandon X because of the algorithmic bullshit, but now I don’t think I can continue being even nominally active on a platform owned by a festering shitbag who defends child pornography as a free speech issue.

