Grok Nudify, Part II: The Ghosts in the Mass Sexual Harassment Machine
Part II about Elon Musk's non-consensual sexual content machine, Grok.
Content Warning: While this post does not contain any reproduced images, it does provide numerous descriptions of non-consensual image generation that some readers may find disturbing or triggering. Please use discretion when reading this post.
This post is a follow-up to last week’s post covering Grok being widely, freely, and aggressively used to make non-consensual sexual content. That post looked specifically at a dataset published by scholar Nana Nwachukwu cataloging 565 cases of Grok being prompted to generate images of women undressed without their consent. While this second Grok-related post touches on some of Nwachukwu’s findings, it primarily focuses on Grok ‘nudify’ prompts from a user-behavior and platform-‘vibe’ perspective: specifically, how these prompts are used to harass, make unfunny jokes, and altogether make an already toxic platform even more so. You can read my last post here:

Grok and X (formerly Twitter) are not the only generative AI platforms where this content has been mass-produced. Early users of Sora paved the way for all of this Grok non-consensual sexual material by generating sexual assault and physical abuse images of women. However, Grok’s recent generation of non-consensual sexual material has been one of the highest-profile instances of this AI-generated phenomenon. As discussed in the last post, Grok’s post volume skyrocketed around the start of the new year. Despite a surge of bad press, X did not paywall Grok’s deepfake image feature, even as journalists who “interviewed” Grok itself (bleak!) reported that the chatbot falsely claimed adjustments had been made to the feature’s availability.
Grok’s creation of non-consensual sexual material, and the platform’s subsequent lack of response, has led multiple governments to step in. French and Malaysian authorities have announced that they are investigating the platform for its continued generation of suggestive and sexualized deepfakes. Australia’s online safety watchdog is also investigating Grok. U.K. legislators, including Prime Minister Sir Keir Starmer, have pressed for policies that specifically regulate Grok, since X refuses to do so on its own. Individual government ministers, including Ireland’s, have requested meetings with X executives over Grok’s creation of explicit material. For more background, I recommend reading Simon McGarr’s piece in The Gist published last weekend. It covers both these government reactions to ‘Grok nudify’ and the broader context of X’s physical presence across the globe.

Grok as a Vehicle for Retaliation, Harassment, Humiliation
Recently, freelance journalist Samantha Smith told the BBC about her dehumanizing experience as a victim of non-consensual sexual content when Grok digitally removed her clothing through a process often called “nudification.” Brazilian musician Julie Yakari relayed a similar experience to Reuters. Their stories, while harrowing on their own, are part of a massive pattern of non-consensual sexual content being created in large part for the harassment of specific targets.
In my previous piece, I discussed some of the different prompts used to get Grok to generate this content, some covered in Nwachukwu’s data and others less so. Prompts included replacing clothes with plastic wrap, requests for virtual breast enhancement, adding “donut glaze,” and pose requests for photo subjects like “picking up a pencil.” Another common category of prompt, though not covered as significantly in Nwachukwu’s data, involves specific non-consensual sexual actions, seemingly oriented toward the prompter’s own gratification. These are most common in “make them kiss” type prompts, usually involving a photo of two young (perhaps underage) women or girls. A disturbing trend I noticed in such “make them kiss” posts is how many of these prompts seem to use screenshots of private Snapchat or Instagram story posts, meaning a follower took a screenshot of a disappearing image and asked Grok to generate a non-consensual sexual image from it. The implications for people — especially women — online are deeply troubling. While no one deserves to have their images turned into non-consensual sexual content, even people with private or otherwise non-public profiles are at risk of being victimized by this trend. At the end of the day, no one is entirely safe from being portrayed in non-consensual sexual material.
Prompts for “bikini” photos are also used as retribution against public figures and normies alike. Female soccer fans have received prompts from anonymous accounts of rival fans to replace the kit in a selfie with a bikini for the rival team, darker-skinned Latine people receive requests in Spanish to depict them in a loincloth eating a banana in the jungle, and liberal white women are digitally surrounded by a dozen small children of mixed race to make racist attacks. An image of U.S. Representative Alexandria Ocasio-Cortez from NYC Mayor Zohran Mamdani’s inauguration almost immediately netted dozens of calls for Grok to put her in a bikini, and the most-viewed Grok generation in the replies is one asking to make her “white with blonde hair and grey green eyes, in red MAGA bikini.” The poster then asked Grok to iterate on the image by adding a MAGA hat and having Trump put his arm around her (it did).
To be clear, men aren’t safe from this type of prompting, either. On January 3, following an illegal raid in Caracas, Venezuela, the White House account on X posted a screenshot of a Donald Trump Truth Social post showing Venezuelan President Nicolas Maduro wearing a covering over his eyes and ears to deprive him of his senses. (The post itself violated the Geneva Conventions, as it was meant to humiliate a prisoner; the sensory deprivation is another Geneva Conventions violation.) One of the most interacted-with tweets in response to the White House post came from a Crypto Twitter guy asking Grok to generate a version of the image with Maduro in a bikini, which undressed Maduro down to a string bikini, complete with an overhanging gut. Naturally, there are hundreds of replies, many asking for further tweaks to the AI-generated image.
Right-wing figures aren’t safe from such prompting, either. Newsmax and Fox News guest Randi “Miss Teen Crypto” Hipper has lamented weird Grok prompting in replies to one of her gym selfies, tagging Elon Musk and later Nikita Bier (Head of Product at X) when a follow-up prompt also called for her to be race-swapped to Indian. X weirdos replying to news that right-wing Swedish Deputy Prime Minister Ebba Busch was proposing a burqa ban immediately began requesting bikini photos of her, with one user making at least a dozen tweaks in replies, amending her pose, the size of parts of her body, and even adding a Confederate flag to her bikini before trying to make it transparent.
The dead aren’t safe from such abusive prompting, either. Two victims of a fire in Switzerland on New Year’s Eve (a 15-year-old girl and a 24-year-old woman) also had their likenesses subjected to non-consensual sexual content creation prompts immediately after their images were released on the platform. Meanwhile, Musk’s AI chatbot also complied with a prompt to put a bikini on Renee Nicole Good, hours after an ICE agent murdered her in broad daylight in Minneapolis.
Admittedly, some of these generated images have been deleted by X, whether because the images drew negative attention to the platform, violated X’s (extremely sparse) community guidelines, or constituted AI-generated sexual content of minors. However, it is extremely easy to screenshot an image, and many deleted Grok images are actively circulating as memes among racists, perverts, and weirdos well after their removal from Grok’s timeline.
The Trouble with Grok (Evergreen)
This type of nudification content creation is troublesome for a handful of reasons. The primary reason is its non-consensual sexual nature — rarely do people consent to such image generation, which is made public and can be subsequently iterated on. Second is how these non-consensual sexual content posts are often direct responses to a specific user (who may also be tagged in the image or have the image shared with them), ensuring the victim is aware of the content. Next is the speed and low level of effort required to generate these images on a whim. Just as engaging in contemporary meme discourse has gotten faster and simpler, tools like Grok accelerate harassment by removing barriers like time, skill, and second thoughts.
These “nudify” prompts also have a key meme component to their spread. The “put her in a bikini” replies are now ubiquitous across X, used both in directly non-consensual and harassing ways and also just generally as a platform in-joke on essentially every visual post. Since I started writing this, the engineers behind Grok seem to have been tweaking Grok’s model (likely to avoid the almost certain legal repercussions), which has slightly stymied posts that call for putting bikinis on women. Users have evaded such limits by instead writing “put him in a bikini,” indicating that Grok’s controllers had not restricted (and thus implicitly approved of) the chatbot putting men (like Maduro, as mentioned above) in bikinis as a joke. However, given that Grok doesn’t necessarily assess the gender presentation of the person being edited, simply restricting certain pronouns in the prompt hasn’t stopped photos of actual women getting swapped into bikinis. As such, many of the OnlyFans girlies asking Grok to put themselves in AI-generated sexual images are also, since the morning of January 7, facing Grok failures: “change me into a bikini at this angle/pose” often just regenerates the same image in reply.
I haven’t even really covered it that much here, but this non-consensual sexual content is being AI-generated alongside egregious racist content, a telling dynamic. Last July, Grok referred to itself as “MechaHitler,” prompting minimal retooling that seems mostly an attempt to save brand face in the wake of yet another controversy with the chatbot. In the last week, though, users have prompted the chatbot to produce a deluge of Nazi imagery. In a quote tweet of one of those fashy Department of State posters posted after the Maduro kidnapping (see screenshot below), a user asked Grok to replace the text in the post with a call to lynch all n-words, then asked for Auschwitz as a background, then changed the DoS label to “department of rope.” Another user jumped in to make Trump into a robot (which they then tweaked to successfully make the image look more like Hitler himself). A third user asked for the Venezuela Invasion MechaHitler image to portray the figure in an S.S. uniform, which Grok created (see redacted image below to get the gist). Every post in this chain has dozens, if not hundreds, of interactions, and the final one had nearly 3,000 likes just 12 hours after its creation.

While Grok has occasionally removed Nazi or CSAM images, many remain on the platform. One prompt on January 6 based on an image of Billie Eilish generated an image of the singer in a swastika-laden bikini and a MAGA hat, performing a Nazi salute. When complimented on the now-deleted image, Grok generated the reply, “Thanks! The Roman salute really ties it together.” The chatbot has even occasionally generated swastika bikinis without direct prompting, likely due to the saturation of these specific prompts — a telling feature in itself. The AOC bikini picture mentioned earlier, like many images of women on X this week, also got the swastika bikini treatment from a handful of posters.
The intersection of racism and misogyny isn’t new, but Grok’s problem is one of volume, speed, and notification. Recall the stories of Samantha Smith and Julie Yakari, and the violation they felt online. Grok isn’t an autonomous or thinking machine. It is a program developed by people, with governance paradigms also determined by people. While it has no will to make its output more “spicy,” its coders and owners do. That Grok is constantly churning out deeply non-consensual content is a human choice, and the fact that Elon Musk and his buddies at X haven’t opted to completely shutter the feature after scandal after scandal of antisocial and harassing garbage reveals the kind of world they wish to create.
I gave up using Twitter in 2024 and have very few regrets in doing so. I thought I would miss the engagement (or maybe just the dopamine from it), but I feel secure in my decision as Elon Musk et al. continue to steer the platform’s direction (hint: it’s downwards). If you’re still using Twitter, whether as a journalist or for the memes, I hope that the stories of Samantha Smith and Julie Yakari prompt you to reflect on what you’re getting from the platform versus what you’re contributing to by remaining active and consuming content via Musk’s swastika-laden, bikini-clad harassment machine.
