Google AI has a quirky habit of inventing idioms
NewsBytes April 25, 2025 06:39 PM




Google's AI Overviews feature has been seen generating explanations for made-up idioms.

When you enter a random phrase and add "meaning" to it, the AI confidently presents it as a real saying, complete with an explanation and origin.

For example, it defined "a loose dog won't surf" as an expression indicating that something is unlikely to happen or succeed.

Similarly, it interpreted "wired is as wired does" in relation to behavior and inherent nature.


AI's confidence can mislead users
Misleading results


The AI's confident delivery and provision of reference links can create an impression that these are common phrases.

This issue highlights a limitation in generative AI technology.

The system sometimes incorrectly identifies invented phrases as widely recognized idioms, such as interpreting "never throw a poodle at a pig" as a proverb with biblical origins.


Relying on vast training data
AI's limitations


Google uses experimental generative AI to produce these results.

Essentially, the technology is a probability machine that predicts the next word based on large amounts of training data, Johns Hopkins University computer scientist Ziang Xiao explained.

However, this approach doesn't always yield the right answer.
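Xiao's description of a "probability machine" can be illustrated with a toy bigram model, a hypothetical minimal sketch (real systems use neural networks trained on vastly larger corpora): the model only ever picks a statistically likely continuation, so for input it has never seen, it has no grounded answer.

```python
from collections import Counter, defaultdict

# Toy corpus; real models train on billions of words.
corpus = "the dog will surf the dog will not surf the cat will sleep".split()

# Count how often each word follows each previous word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent next word after `word`, or None if unseen."""
    counts = following.get(word)
    if not counts:
        return None  # no data to predict from
    return counts.most_common(1)[0][0]

print(predict_next("dog"))     # prints "will", the likely continuation
print(predict_next("poodle"))  # prints "None": the model has no basis here
```

A chatbot, unlike this sketch, rarely returns the equivalent of `None`; it generates a plausible-sounding continuation anyway, which is how invented idioms acquire confident explanations.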

Research shows that AI aims to please: chatbots often tell users what they want to hear, misinterpreting phrases or reflecting users' biases back at them.


AI's reluctance to admit ignorance
Information fabrication


AI systems are usually hesitant to acknowledge when they don't know an answer. In such scenarios, they may even make up information.

"When people do nonsensical or 'false premise' searches, our systems will try to find the most relevant results based on the limited web content available," said Google spokesperson Meghann Farnsworth.

She added that this is true for search overall, and sometimes AI Overviews will also trigger in an effort to provide helpful context.


Google's AI Overviews results vary
Inconsistency noted


The generation of an AI Overviews result isn't guaranteed for every query.

Cognitive scientist Gary Marcus, author of Taming Silicon Valley: How We Can Ensure That AI Works for Us, discovered this inconsistency after five minutes of experimentation.

He said that "it's wildly inconsistent, and that's what you expect of GenAI, which is very dependent on specific examples in training sets and not very abstract."

© 2025 LIDEA. All Rights Reserved.