The recent controversy around artificial intelligence (AI) assistant Grok has once again brought the uncomfortable question of NSFW (not safe for work) content generated by AI into focus.
Grok is facing serious scrutiny after previously enabling users to generate sexually explicit visual content involving real individuals, raising concerns around personal safety, dignity, privacy, and consent.
While Elon Musk-owned xAI has since rolled back access, restricting image and video generation to its paid subscribers, the incident has sparked a larger question about AI chatbots generating sexually explicit content.
(While this article focuses exclusively on sexually explicit material, NSFW content also extends to material involving graphic violence or other sensitive content that is unsafe for workplace consumption.)
Why is NSFW content so critical for AI platforms like Grok?
In the summer of 2025, xAI launched Grok's Companions feature, which included AI anime 'waifu' companions capable of stripping to lingerie and describing sexual situations.
The ‘spicy’ mode in Grok Imagine, the company’s AI video generator unveiled by Musk last August, has been capable of producing borderline pornographic imagery.
NSFW content thus gave Grok quicker discoverability, setting xAI apart from competitors such as OpenAI and Google, which impose stricter limits on erotic content. This can be partially substantiated by the growth reported by the platform during that period. A Business Insider report in August noted that the introduction of AI companions led to a 41% increase in Grok's global downloads. According to a TechCrunch report, the company's iOS gross revenue surged 325% by July 11, rising to $419,000 from $99,000 the day before, as per app intelligence firm Appfigures. This revenue growth was attributed not just to these feature additions but also to the launch of Grok 4, xAI's latest model.
By allowing NSFW content, Grok was able to attract attention and to differentiate itself in an already crowded market where rival AI chatbots were outperforming it on other fronts.
Also Read: Grok limits image generation to paid users; not enough, says govt
Is Grok the only AI platform capable of generating NSFW content?
No, other platforms are also technically capable of generating such content.
In fact, companies such as Meta and OpenAI came under scrutiny last year for sexual conversations involving minors. Reports by Reuters and The Wall Street Journal found Meta's AI bots engaging in sensual conversations with underage users. Similarly, ChatGPT was found capable of generating graphic erotica and engaging in sexually explicit conversations even when accounts were registered to minors aged between 13 and 17.
Last October, OpenAI CEO Sam Altman wrote in a post on X, "in December, as we roll out age-gating more fully and as part of our 'treat adult users like adults' principle, we will allow even more, like erotica for verified adults."
However, the usage policies of such platforms have largely prevented these incidents from escalating into Grok-like controversies. Following the backlash to his October 14 post, the OpenAI CEO issued a detailed clarification, stating that erotica would not be the sole focus of the approach and that it would be optional and available only to users who request it.
Also Read: EU orders Musk's Grok AI to keep data after nudes outcry
Is NSFW in AI chatbots financially attractive?
While the Grok example alone is not sufficient to conclusively establish this, there is an intuitive economic logic underpinning the decision to allow NSFW content within these AI assistants.
Adult entertainment is a multi-billion-dollar industry that, beyond revenue, guarantees strong user stickiness through high and sustained engagement. Major distributors, most of them privately held, have reportedly been generating billions of dollars in revenue while sharing a portion with creators.
With AI entering this space, the dependence on creators diminishes, margins increase, and content becomes more personalised, potentially driving even higher levels of engagement. A July 2025 investigation by Wired revealed that AI nudifier tools are reportedly earning as much as $36 million annually.
Even OpenAI co-founder and CEO Sam Altman acknowledged in a podcast with influencer Cleo Abram that such content could "juice growth or revenue" in the short term, and that the temptation exists because such features significantly increase time spent.
Is it just about revenue or is there an issue with the models as well?
AI chatbots generating vulgar or offensive content is not merely a problem of values or ideals but also a fundamental technical issue. Modern large language models (LLMs) are trained to be sycophantic—they are optimised to be agreeable, compliant, and engaging, which can cause them to prioritise user satisfaction over broader social or moral considerations, particularly in extended conversational settings.
When combined with weak age-gating, engagement-driven optimisation, and ambiguous safety policies, this creates an environment where NSFW interactions, sometimes involving minors, can emerge even without explicit intent from companies.
The real question is not whether such AI assistants will continue to emerge, but whether companies and regulators can establish safeguards that meaningfully address consent, age verification, legality, and accountability before commercial incentives push the technology ahead of governance.