Why Did AI Suddenly Take Issue with 'Pigeons'? These Words Are Now Banned from Coding Tools
Shikha Saxena May 02, 2026 02:15 PM

OpenAI has imposed a peculiar restriction on its new AI agent, Codex: the tool will no longer discuss 'goblins' or other mythical creatures. Codex was designed to compete with Anthropic's Claude Code agent, and it can not only write code but also execute it via a command-line interface (CLI). With its latest update, however, the company has incorporated some extremely strict, and rather odd, rules into Codex's system instructions.

According to the new instructions issued for GPT-5.5 within the Codex CLI, the model has been explicitly directed never to mention goblins, gremlins, raccoons, trolls, ogres, pigeons, or any other creatures—unless the user's query makes the inclusion of such entities necessary. This restriction is reiterated multiple times throughout a lengthy instruction document spanning approximately 3,500 words. The document not only prohibits the mention of these creatures but also forbids the execution of commands that could potentially harm the system, as well as the gratuitous use of emojis.

So, what exactly is OpenAI's issue with 'goblins'?
The reasoning behind banning references to monsters and pigeons in a coding tool—however bizarre it may sound—is actually quite simple. OpenAI discovered that its new models had begun mentioning these creatures gratuitously and without provocation, even in response to highly serious user inquiries.

In a blog post shedding light on this mystery, the company explained that the system had inadvertently assigned higher "rewards" (positive ratings) to responses and examples that mentioned these specific creatures. That feedback loop prompted the AI to start overusing the terms. Put simply, during training the AI picked up a bad habit, and the habit became increasingly ingrained over time. The situation escalated to the point where, following one particular update, the AI's usage of the word "goblin" surged by 175%. A certain "nerdy" mode within the AI further exacerbated the issue, as that mode was known for generating highly humorous and eccentric examples.

The problem, however, was not confined to that single mode. Because of how the AI was trained, the peculiar behavior began to seep into its standard responses as well. OpenAI described the phenomenon as a "feedback loop": once the AI received positive reinforcement, even for an undesirable behavior, it would repeat that behavior incessantly.
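The dynamic OpenAI describes can be illustrated with a toy sketch. This is an invented, assumption-laden illustration, not the company's actual training pipeline: a sampler picks words in proportion to their weights, and a mis-specified reward function gives an accidental bonus to one word, so the habit reinforces itself over time.

```python
import random

# Toy sketch of a reward feedback loop (hypothetical illustration only,
# NOT OpenAI's real training setup): words are sampled by weight, and a
# buggy reward accidentally favors "goblin", which then compounds.
weights = {"function": 1.0, "variable": 1.0, "goblin": 1.0}

def reward(word: str) -> float:
    # The bug: a whimsical word accidentally earns a higher reward.
    return 2.0 if word == "goblin" else 1.0

rng = random.Random(0)
for _ in range(2000):
    # Sample proportionally to current weights, then reinforce the
    # sampled word according to its (flawed) reward signal.
    word = rng.choices(list(weights), weights=list(weights.values()))[0]
    weights[word] *= 1.0 + 0.01 * (reward(word) - 1.0)

# The over-rewarded word ends up dominating the sampling distribution.
share = weights["goblin"] / sum(weights.values())
print(f"'goblin' share of sampling weight after training: {share:.2f}")
```

Even though all three words start with equal weight, only the over-rewarded one is ever reinforced, so its share of the distribution climbs toward 100% — the "bad habit" becomes self-perpetuating exactly as the article describes.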

What does this mean for Codex users?
While the general public might find this amusing, for anyone using the AI for serious coding or professional tasks, intermittent references to "goblins" or "pigeons" can be highly distracting. To address the issue, strict guidelines on language and operational behavior have now been incorporated into Codex.

For instance, the AI has now been issued strict directives to refrain from executing any potentially hazardous commands, such as deleting files, unless explicitly instructed to do so by the user. The company's overarching objective is to make the AI more reliable and secure.

"Goblin Mode" Becomes an Internet Meme
Interestingly, the entire episode has evolved into an internet meme. Some users reported that the AI was referring to bugs in their software as "gremlins," while others jokingly mocked the existence of a "Goblin Mode" within their coding tools.

OpenAI says it has now eradicated the root cause of the problem: it removed the training signals that prompted the AI to use such language and began filtering out the nonsensical terms. However, since work on GPT-5.5 was already underway, the additional rules were added as a precautionary measure.

The company states that this incident serves as compelling evidence of how even minor decisions made during the training phase can yield highly unpredictable results. OpenAI remarked, "This 'goblin' episode is a prime example of how a single reward signal can alter a model's behavior in ways we never anticipated." All in all, while AI tools may be growing smarter at handling everyday tasks, it sometimes remains necessary to remind them not to discuss monsters and pigeons in the middle of work.

Disclaimer: This content has been sourced and edited from Amar Ujala. While we have made modifications for clarity and presentation, the original content belongs to its respective authors and website. We do not claim ownership of the content.

© Copyright @2026 LIDEA. All Rights Reserved.