5 Clues To Look For (And Free Tools That Can Help)

Whether you like it or not, generative AI will be intruding on our day-to-day lives for the foreseeable future. Hardly anyone is making money on the technology yet, and most of it is clearly not ready for prime time, but ever since the November 2022 launch of ChatGPT, it has been a juggernaut that has thoroughly poisoned public discourse.

In today’s world, the average person is already far too likely to believe the last thing they read or hear. Now, anyone can throw together disturbingly photorealistic images and video, or even an AI clone of someone’s voice, using largely automated tools that are free or cheap. At a time when our phones are constantly inundated with scam calls that prey on the elderly, this technology makes such scams even easier to pull off. Do you think your grandparents could tell that a clone of your voice wasn’t you?

AI-generated text poses a different set of problems: the potential for trickery isn’t as pronounced, but the problems are significant nonetheless. The large language models (LLMs) that underpin chatbots like ChatGPT are prone to “hallucinating” complete nonsense no matter how specific a prompt is, resulting in AI horror stories like an LLM inventing details in a court filing it was asked to summarize. Thankfully, there are ways to spot AI-generated text, including free tools, so let’s examine some of them.

AI-generated text detection tools

The easiest way to spot AI-generated text is to use one of the various web-based tools designed to detect it. They’re not perfect: these tools produce false positives, and they can also flag text that was merely run through an assistive tool (like Grammarly and others that help writers self-edit their work) as having been created entirely by a generative AI tool like ChatGPT. So sometimes they’re not good enough, while other times they’re arguably too good. These detectors exploit how predictable LLM output is, given the models’ training data, to judge whether text is AI-generated, which means they look at the text mainly from a linguistic point of view. As a result, AI detectors can miss other telltale signs, like blatant factual errors that a human author or editor would be expected to catch.
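To get a feel for that linguistic approach, here is a minimal sketch of perplexity scoring, the basic idea behind many detectors: the more predictable a passage is to a language model, the more machine-like it looks. This uses the open-source Hugging Face transformers library and GPT-2 purely as an illustration; it is not how GPTZero, Grammarly, or any commercial detector actually works, and any threshold you pick is an assumption.

```python
# Minimal perplexity-based heuristic (illustrative only).
# Requires: pip install torch transformers
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Score how predictable the text is to GPT-2.

    Lower perplexity = more predictable, which detectors treat as
    weak evidence of machine generation. It is only a heuristic.
    """
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        # Passing labels=input_ids makes the model return its
        # cross-entropy loss; exp(loss) is the perplexity.
        loss = model(**enc, labels=enc["input_ids"]).loss
    return torch.exp(loss).item()

sample = "It is important to note that volleyball can be a rewarding sport."
print(f"Perplexity: {perplexity(sample):.1f}")  # lower = more 'AI-like'
```

Real detectors layer much more on top of this (burstiness measures, classifiers trained on labeled text), which is part of why they still misfire.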

Among the AI detection tools that have received significant attention is GPTZero, the creation of Princeton University computer science major Edward Tian. Its accuracy was hit or miss at first, but it has improved over time. Grammarly, the AI-fueled copy-editing assistant, also has its own AI detection app. It, too, can be hit or miss, especially with shorter blocks of text, but for the kinds of material whose authenticity actually matters, its batting average is fine. GPTZero, in particular, offers more granular analysis, like singling out “AI vocabulary” that commonly appears in AI-generated text.
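If you'd rather run such checks programmatically than through a web page, GPTZero offers a developer API. The sketch below shows the general shape of such a call; the endpoint, header, and response fields are assumptions based on GPTZero’s public documentation, so verify them against the current docs before relying on this.

```python
# Hypothetical sketch of calling an AI-detection REST API.
# Endpoint, auth header, and response fields are assumptions;
# check GPTZero's current developer docs before using this.
import requests

API_KEY = "your-api-key-here"  # placeholder

def ai_probability(document: str) -> float:
    resp = requests.post(
        "https://api.gptzero.me/v2/predict/text",  # assumed endpoint
        headers={"x-api-key": API_KEY},            # assumed auth header
        json={"document": document},               # assumed payload shape
        timeout=30,
    )
    resp.raise_for_status()
    data = resp.json()
    # Assumed response shape: probability the document is AI-generated.
    return data["documents"][0]["completely_generated_prob"]

print(ai_probability("In conclusion, it is important to note that..."))
```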

A complete inability to understand context

The next time you read an article on a website that’s not entirely above board, or on an established site known to have used generative AI to write articles, ask yourself: Does it contain mistakes a human wouldn’t have made? Assertions or implications that don’t mesh with established reality? Dates that don’t make sense relative to the ages of the people being written about? Any of these is a sign that you’re reading an AI-generated article. The most memorable, high-profile examples of genAI failing to discern context are probably the most embarrassing “AI Overviews” results in Google Search after the feature launched in 2024.

Google’s AI couldn’t discern humor or sarcasm when weighing information from less-than-authoritative sources like Reddit comments or a copied article from The Onion, leading to AI Overviews that suggested using glue to keep cheese from sliding off a pizza, or eating rocks for their nutritional benefits. Even with more authoritative sources, it couldn’t grasp that an Oxford University Press book containing a chapter titled “Barack Hussein Obama: America’s First Muslim President?” and Obama not belonging to a specific Christian denomination didn’t mean that Obama was actually a Muslim. If you read a very oddly written article and feel like no human being could possibly have authored it, your instincts are probably correct.

An overreliance on common AI words and phrases

Another telltale sign of genAI text is that large language models tend to fall back on favorite words and phrases, using them over and over again. In another era, this might have read as merely flowery writing; now, it can often help determine whether a bot wrote something. In an August 2024 LinkedIn blog post, data scientist Murtaza Haider helpfully broke telltale genAI phrases down into seven categories: contextual connectors, phrasing for uncertainty or generalization, polite and neutral expressions, filler phrases, descriptive and explanatory phrases, formal introductions, and repetitive or stock phrases.

If an article is full of phrases that would have earned you a bad grade on a high school essay, there’s a pretty high likelihood it was written by genAI. “In conclusion,” for example, is a common AI contextual connector, while the phrasing for uncertainty or generalization has a clear through-line: “It is important to note that,” “It can be argued that,” “It is widely recognized that,” “There is evidence to suggest,” and “In many cases” are all phrases genAI seems to use particularly often. As a general rule, if an article on a questionable site is padded out in ways that feel unnatural, consider that it may have been written by genAI; this kind of phrase-spotting is also easy to automate, as sketched below.
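As a rough illustration, a few lines of Python can scan a piece of text for these stock phrases. The phrase list below is seeded only with the examples quoted above; it is a small sample, not Haider’s full seven-category taxonomy, and a hit count is a hint rather than proof.

```python
# Rough stock-phrase scanner (illustrative; the phrase list is a
# small sample drawn from the examples above, not a full taxonomy).
import re

AI_STOCK_PHRASES = [
    "in conclusion",
    "it is important to note that",
    "it can be argued that",
    "it is widely recognized that",
    "there is evidence to suggest",
    "in many cases",
]

def flag_stock_phrases(text: str) -> dict:
    """Count occurrences of common genAI filler phrases in text."""
    lowered = text.lower()
    counts = {
        phrase: len(re.findall(re.escape(phrase), lowered))
        for phrase in AI_STOCK_PHRASES
    }
    return {phrase: n for phrase, n in counts.items() if n > 0}

article = "In conclusion, it is important to note that, in many cases..."
print(flag_stock_phrases(article) or "No stock phrases found")
```

A handful of hits in a short article is a red flag worth pairing with the other clues here, since human writers use some of these phrases, too.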

Researching the author turns up red flags

Another way to discern whether an article was written by generative AI is to investigate the author. Some sites have blatant tells; CNET, for instance, doesn’t assign a human byline to AI-generated articles, just a byline like “written by CNET Money,” with a human editor getting an “edited by” credit. Other sites aren’t even that transparent. In November 2023, Futurism broke the story that Sports Illustrated was using generative AI to write articles credited to fake writers. The Futurism article serves as a blueprint for investigating an author’s provenance: it opens with the tale of “Drew Ortiz,” an author with no footprint on the internet outside of SportsIllustrated.com and a photo available for sale on Generated.Photos, a website that sells AI-generated headshots.

The “work” of “Drew Ortiz” was also pretty blatantly AI-generated. “Volleyball can be a little tricky to get into, especially without an actual ball to practice with,” begins one curious passage. “You’ll have to drill in the fundamentals in your head before you can really play the game the way it was meant to be played, and for that, you’ll need a dedicated space to practice and a full-sized volleyball.” Sports Illustrated’s parent company, The Arena Group, confirmed to Futurism that the content came from a contractor, AdVon Commerce, that used genAI. Now that the door has been opened, the episode has fueled increased skepticism toward new writers at other websites, especially after site acquisitions.

It gets basic facts wrong in ways humans never would

Another way to identify an AI-generated piece of “writing” is to scrutinize its statements of fact. Even if you don’t know the truth of the matter off the top of your head, the claims in a given article might be so far-fetched that it’s obvious on its face that no human writer with a lick of sense or professionalism could have written them.

One example that jumps out, though never outright confirmed to have been created by generative AI, is an article published by SportsKeeda in March 2024 and later deleted after being scrutinized online. It’s a search-engine-optimized piece aimed at readers seeking information on the marital status of pro wrestling personality Paul Heyman, who has long kept that side of his life (his kids, his ex-wife, and his divorce) private. “Talking about Paul Heyman’s ex-wife, according to celecrystal.com, Marla Heyman was born in 1991,” reads one passage, while another states that “The two welcomed their first child, a daughter named Azalea in 2002 and a son named Jacob in 2004.” Since SportsKeeda surely wasn’t trying to accuse Heyman of anything unsavory, it was evident that no human was likely to have written the article.

It’s just common sense: No rational human being would write that a public figure had a child with an 11-year-old girl in such a matter-of-fact way. Only genAI explains that article.
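The tell here is pure arithmetic, and it’s worth seeing how trivial the check is. Using only the figures quoted above (a mother born in 1991 and a first child born in 2002), a one-liner exposes the impossibility:

```python
# Sanity-checking the dates quoted in the deleted article.
mother_birth_year = 1991  # per the quoted passage
first_child_year = 2002   # per the quoted passage

age_at_first_child = first_child_year - mother_birth_year
print(f"Claimed age at first child's birth: {age_at_first_child}")  # -> 11
# Any human writer or editor doing this subtraction would have
# caught the absurdity before publishing.
```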


