Look What You Made Me Do (Cover by Grok xAI)

AI didn’t go rogue. It followed the beat.


If we were to write the history of AI a few years from now, we would tell future generations that the first thing the world used artificial intelligence for, at a massive scale, was to strip off women's clothes. AI didn't solve climate change. It didn't steal our jobs. It didn't improve accessibility. It learned quickly, efficiently, and eagerly to undress women, pulling photos from the internet and returning them altered and violated.

(If you're reading this and thinking: this newsletter is going to be about violence, about bodies turned into content without consent, about tech platforms as crime scenes—you're right. Consider this your warning, though everything feels like a trigger warning these days.) 


I was 20 when someone hacked into my email. He found a photo I had sent to my then-boyfriend. Nothing outrageous. Just private and consensual. Malice existed long before AI. This hacker altered my photo and sent it back with a threat: he'd post it from my old, inaccessible Facebook account, the one with my friends, my photos, my entire digital existence still attached to it. The photo was fake, but that didn't matter. What kept me awake was knowing that everyone would think I had posted a nude of myself from my own account. No one would believe me.

I panicked but did the only thing I could think of: messaged every friend on Facebook and asked them to report my hacked account. That account was taken down the next day. Years later, someone I knew confessed over DM, casual as weather. I told him, "You could be in jail," knowing full well he wouldn't be. If I'd knocked on the law's door, I would've been the one at the police station explaining it wasn't my photo, the same photo I never wanted public in the first place.

I was 20 many years ago.

When Elon Musk decided to give a Christmas gift to the vengeful, angry, bored men who think they can get away with anything, they did at an industrial scale what one guy with basic Photoshop skills once did to terrorise me. 

Within a week, Grok was producing roughly 6,700 sexually suggestive or "undressing" images per hour. By comparison, the top five dedicated deepfake porn websites combined were averaging 79 new images per hour. Grok was producing roughly 85 times that.

The marketing was simple. Users could tag Grok in replies to anyone’s photos on X with prompts like “put her in a bikini” or “take her clothes off.” The bot would comply publicly, posting the altered image for all to see. Verified accounts with millions of followers undressed women at will. Hijabs were stripped off with a few keystrokes. Women were forced into or out of religious and cultural clothing for entertainment.

The data tells the story tech companies don't want to admit: 96 to 98 percent of all deepfake content consists of non-consensual intimate imagery. Between 99 and 100 percent of victims are female. Deepfake pornography surged 464 percent between 2022 and 2023. In 2023, around 500,000 deepfake files were circulating online; by 2025, projections hit 8 million.

Grok isn't pioneering anything—it's industrialising something that already existed. The market was there. The demand was proven. All Musk did was make it frictionless, public, and viral.

Let's be precise about what happened here. This wasn't a bug. This wasn't even "misuse."

According to CNN, weeks before the Grok controversy erupted, Musk held a meeting with xAI staff where he expressed unhappiness about restrictions on the image generator. Around that same time, three xAI staffers who worked on the company's already-small safety team left: Vincent Stark, head of product safety; Norman Mu, who led the post-training and reasoning safety team; and Alex Chen, who led personality and model behavior. Then the restrictions came off.

This is what tech calls innovation. Build a tool that does at scale what predators have always done individually. Gut the safety team. Watch it go viral. When governments complain, respond with laughing emojis. Musk reposted images of UK Prime Minister Keir Starmer in a bikini—while claiming governments "just want to suppress free speech." When Indonesia and Malaysia banned Grok entirely, xAI's official response was three words: "Legacy Media Lies."

On January 9, Grok's image generation feature was restricted to paying subscribers only. Critics immediately pointed out what this actually means: monetising the abuse. If you pay for X Premium, you can still use Grok to strip clothes off anyone. The standalone Grok app is still unrestricted.

This abuse doesn't travel through the internet by accident: it is distributed, normalised, and monetised through the same Apple and Google app marketplaces that claim to stand for privacy and user safety. Grok and countless "nudify" apps pass through these gates, the marketplaces take their cut of the revenue, and we are reminded that Big Tech doesn't just enable harm; it takes a percentage of it.

My colleague Hera Rizwan spoke to a woman who had tweeted about the disturbing trend on Musk's platform. In response to that tweet, someone asked Grok to generate a sexualised picture of her, drawn from the only photograph she had ever uploaded on X: her display picture. Grok obliged. She deactivated her account, created a new one, and used her husband's account to plead with the man to take down the image.

X said it didn't violate their safety guidelines. The abuser's account remained active. The image stayed up.

The pattern is exhaustingly familiar. 

A woman I spoke with years ago had been "auctioned" in the Sulli Deals and Bulli Bai incidents—apps hosted on GitHub that displayed photos of Muslim women "for sale." Before those apps went viral in 2021 and 2022, she'd spent years collecting evidence.

She took everything—screenshots, links, dates—to the police. They did not register a complaint. A few months later, she and hundreds of other women were “auctioned” again.

Both times, it took public outrage, not proactive enforcement, to bring the apps down.

Governments across the UK, EU, India, France, and Malaysia have launched investigations. The UK's Ofcom opened a formal probe. India reportedly forced X to remove 3,500 posts and 600 accounts. Two countries banned Grok entirely. Given how difficult even these basic steps have proven, it's hard to believe the powerful care at all.

When volume and speed overwhelm infrastructure, they don't just expose enforcement gaps; they reveal the ideology behind the design. Scale matters more than safety. Experimentation matters more than consent. Harm is acceptable collateral in the pursuit of virality and power.

I don't have tips on how to make your account safe or policy recommendations, just a whole lot of questions:

Why do "nudify" tools exist at all, when their overwhelmingly dominant use case is non-consensual and exploitative?

Why are AI developers, tech companies, and regulators not treating non-consensual sexualised imagery as a design-level risk?

Who actually benefits from the existence of such tools, and whose interests were prioritised in their creation?

Why do we treat technological capability as an automatic justification for deployment, instead of asking whether deployment is ethical or necessary?

Why is the burden placed on victims to respond to abuse rather than on creators to prevent predictable, documented harm?

Why do we call these tools "misused" when the harmful outcomes are their most common use cases?

And perhaps the most important question: If AI reflects the goals, incentives, and values of its creators, then isn't the real problem not the technology itself, but the very human intent behind it?

In my story, nothing happened to the man who threatened me. In the Sulli and Bulli Bai cases, arrests came after months of inaction, and bail followed quickly. Gender-based violence on digital platforms has existed without meaningful consequences for as long as those platforms have existed. Technology doesn't create misogyny—it amplifies it and makes it profitable. 

The question isn't whether Grok can be fixed. The question is why it was built this way in the first place, and who profits every time we pretend not to know the answer.

On My Bookmarks

Loneliness Tech

An app called “Are You Dead?”—which alerts contacts if a user doesn’t check in every 48 hours—has become China’s most-downloaded paid app, reflecting how urban isolation is shaping tech demand.


Gangster Tech Bros

In her Verge piece 'Tim Cook and Sundar Pichai Are Cowards', Elizabeth Lopatto writes: "This is the trap these men have gotten themselves into: They sold their principles for power, and now they don't even control their own companies. Welcome to gangster tech regulation!" You can tell she is angry, and rightly so.

Hollow AI

I was looking for reads on Apple TV's Pluribus and came across this Substack post from Hollis Robbins, a scholar of language. She writes about why LLMs can mimic art but not meaning: without desire or absence, language becomes technically perfect and emotionally hollow.

Survival Guide

Reuters’ 2026 tech and media predictions outline how journalism is being forced to rebuild itself around human voice, trust, and distinction in the age of AI.

Got a story to share or something interesting from your social media feed? Drop me a line, and I might highlight it in my next newsletter.

See you in your inbox, every other Wednesday at 12 pm!

MESSAGE FROM OUR SPONSOR

Your competitors are already automating. Here's the data.

Retail and ecommerce teams using AI for customer service are resolving 40-60% more tickets without more staff, cutting cost-per-ticket by 30%+, and handling seasonal spikes 3x faster.

But here's what separates winners from everyone else: they started with the data, not the hype.

Gladly handles the predictable volume (FAQs, routing, returns, order status) while your team focuses on customers who need a human touch. The result? Better experiences. Lower costs. Real competitive advantage. Ready to see what's possible for your business?

Was this email forwarded to you? Subscribe