X, AI, and Spicy Mode Walk Into a Room...
- Cole Brautigan

- Apr 2
- 6 min read
Updated: Apr 26
Introduction: This Was Predictable
In the social media space, AI models have been rolled out rapidly and haphazardly, and none so badly as X’s AI model, Grok. Lately, the chatbot has been used to generate sexualized images from photos, often of children, that were posted innocently with no wish for the subject to be sexualized. This usually takes the form of X users asking Grok to “undress” these children, creating explicit child sexual abuse material (CSAM), which has naturally sparked widespread concern. Regulatory agencies across the world have been asking whether X has strong enough safeguards in place, while platform representatives insist the system was never supposed to produce these outputs, even though it has.
Warnings about AI-generated CSAM did not emerge from this controversy alone; they have been around for a while. In 2023, the Stanford Internet Observatory found that one of the largest open training datasets for AI image generation, LAION-5B, contained thousands of suspected CSAM images, with over 1,000 verified cases. This means that models trained on this dataset could produce harmful content, especially when prompted to. It should not be surprising, then, when large models combine contaminated datasets with loose safeguards, as Grok does, which we will discuss when we get to its “spicy mode”.
What is Actually Happening on X
It’s quite unnerving how much of X is now this generated content. Researchers examined more than 500 public prompts sent to Grok and found that nearly 75% requested nonconsensual sexualized images of real women and/or minors. For those not familiar with the posting environment on X, ordinary users routinely post photographs of family members, themselves, or other individuals such as celebrities. Predatory users can get around Grok’s loose safeguards simply by asking it to undress the person(s) in a chosen photo. The AI can even add sexual elements to otherwise normal images. Communities have also formed in which predatory users coach one another on how to bypass the safeguards and improve their outputs. Copyleaks, a popular AI-detection company, estimated that one nonconsensual sexualized image is generated per minute.
This was not always the environment on X. Reports indicate that in 2023, Grok would refuse such prompts, and the ability to edit other people’s photos with such ease did not exist until recently. In August of 2025, X introduced a “spicy mode” for text-to-video generation, designed explicitly for suggestive content, and misuse intensified. When prevention relaxes, the risk of misuse skyrockets.
Platform Choices and Proper Safeguards
This loosening of safeguards also appears to be a trend. Reporting indicates that in October of 2025, after the launch of Grok’s spicy mode, AI companies like OpenAI and X began loosening restrictions to allow their systems to generate pornographic content. Moderation of the output is extremely relaxed, expanding the room for this sort of abuse. Importantly, not all companies have taken the same approach: OpenAI has since walked back its changes, Gemini has maintained strong safeguards on sexualized output, and Microsoft’s AI CEO has publicly declined to allow porn generation altogether.
Regulators, mainly European, have begun to respond to this problem. The UK has enacted the Online Safety Act, which prohibits the creation or sharing of nonconsensually generated sexual images. Ofcom, the UK communications regulator, has been looking into banning “nudification tools” altogether and has been in discussions with X; the ban would include possible criminal charges and fines for those supplying the tools. Zooming out to the EU, regulators have been reviewing X under the Digital Services Act and have previously fined X €120 million for DSA breaches. This is a strong signal from agencies around the world that there is a willingness to hold platforms accountable. Unfortunately, both past and present US administrations have been less keen on regulating the big AI companies, which has set the stage for the X situation, since X is based in the US. One step in the right direction came when the Take It Down Act was signed into law in early 2025, criminalizing the creation and distribution of AI-generated sexual content depicting an unconsenting individual.
The Deepfake Spillover in Schools and Youth
The misuse enabled by platforms such as X has spread beyond those platforms and into schools, which have been forced to confront a pattern of students, and sometimes faculty, using nudify tools on children and adolescents. Stanford released a policy brief in support of AI regulation that documented this emerging pattern of misuse across the country and even internationally. Incidents range from so-called jokes, to bullying both online and off, to outright coercion, with the result always being psychological harm. Because the problem is still new and regulations are still lax, especially in the US, most schools have been left unprepared. Teachers and staff severely lack training on how to deal with such tools and have no systematically enforced protocols for responding appropriately.
Fortunately, many states have criminalized generated CSAM, but punitive measures can only do so much, and that gap is felt by educators. This policy gap can create an environment where schools either hesitate to act or overreact, handling incidents without an appropriate and consistent level of care.
While expanding criminal statutes is productive to an extent, leaving those who regularly work with these populations without operational guidance risks confusion rather than prevention. Effective prevention doesn't just happen; it requires clear guidelines and pathways for proportional responses that address the harm while reducing recurrence.
Prevention
If AI-generated predation is on the rise and complicated, then prevention has to be layered. AI is a tool and, like most tools, can be used productively as well as for harm. For such a tool, prevention efforts across structure, education, and regulation must all align.
Structurally
At the platform level it’s quite simple - implement actual safeguards. There should be stronger refusal systems for sexually explicit prompts and better moderation site-wide. The platform could also use the evidence of this misuse to cull predatory coaching communities, including through broad IP bans. The data used to train these models must also be audited, and such auditing should be standard practice. Training data should be reviewed either by a government agency - I would recommend the FBI, given its CSAM database - or at the very least by a third party, to sift out and remove illegal and exploitative content, as in the sketch below.
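To make “dataset auditing” concrete, here is a minimal sketch of one standard screening step: checking every image in a training corpus against a hash list of known abusive material, the way NCMEC-style hash lists are used in practice. The file paths and blocklist names here are hypothetical, and exact cryptographic hashes only catch byte-identical copies; production systems layer on perceptual hashing (e.g., PhotoDNA) to catch re-encoded or cropped versions.

```python
import hashlib
import shutil
from pathlib import Path

# Hypothetical paths - a real pipeline would pull the hash list from a
# vetted source such as an NCMEC-style hash-sharing program.
DATASET_DIR = Path("training_images")
BLOCKLIST_FILE = Path("known_bad_sha256.txt")
QUARANTINE_DIR = Path("quarantine")

def load_blocklist(path: Path) -> set[str]:
    """Load one lowercase hex SHA-256 digest per line."""
    return {line.strip().lower()
            for line in path.read_text().splitlines()
            if line.strip()}

def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large images need not fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def audit(dataset: Path, blocklist: set[str]) -> None:
    """Scan the dataset and quarantine any file on the blocklist."""
    QUARANTINE_DIR.mkdir(exist_ok=True)
    flagged = 0
    for image in dataset.rglob("*"):
        if image.is_file() and sha256_of(image) in blocklist:
            # Move the match out of the training set for human review
            # and mandatory reporting; never silently delete evidence.
            shutil.move(str(image), QUARANTINE_DIR / image.name)
            flagged += 1
    print(f"Flagged {flagged} file(s) for review.")

if __name__ == "__main__":
    audit(DATASET_DIR, load_blocklist(BLOCKLIST_FILE))
```

Note that matches are quarantined rather than deleted: flagged material has to go to human review and mandatory reporting, not quietly disappear from the corpus.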
For Parents and Educators
While basic digital literacy for educators and caregivers has improved, it still lags behind the rapid evolution of our technology. Digital literacy education for caregivers and educators should include discussions of AI tools and image manipulation through the lens of prevention. Institutions and organizations could implement reporting channels for both faculty and students to report nonconsensual AI content, working alongside accountability frameworks. Such frameworks must also be age-appropriate in their responses. Adolescents often act without fully understanding the impact of their actions, especially the legal impact, so responses should focus on behavioral correction and acknowledgment of the harm rather than separation and punishment. For parents, the practical application is to be present in your child's online life. Children deserve privacy, but their online actions shouldn't be secret - secrecy puts children at risk and creates opportunities to act on harmful impulses.
Regulatory
Finally, regulatory bodies must take action to provide a clear foundation. Legislatures should clarify that mandatory reporting obligations cover AI-generated content and should update bullying and cyberbullying statutes to address it as well. Reactive AI bans are insufficient; effective and sustainable prevention requires clear, evidence-based standards that give guidance to everyone from individuals all the way up to the platforms themselves.
AI as we know it is here to stay and will keep evolving. The Grok issue has simply exposed the weaknesses in our current framework, and it’s crucial that we respond. Effective prevention requires more than reacting; it demands accountability from platforms and clear guidelines that allow our institutions to respond to youth in an informed way.