Digital Forensics Expert Shares An Easy Test To Tell If Something Is AI
Samira Vishwas May 11, 2026 06:24 AM

As the technology continues to improve, it can be nearly impossible to distinguish an AI-generated image or video from the real thing. Advances have come so quickly that even trained observers can be fooled, let alone regular people.

However, one expert believes there is a reliable way to tell when something was created by AI, and it's extremely obvious once you know what to look for. That means you never have to wonder, as we all did with the jumping bunnies, whether a friend's social media post or a YouTube video is real.

A digital forensics expert says there’s one easy test that tells you if something was created by artificial intelligence.

Hany Farid, a leading expert at the University of California, Berkeley, is often called upon by journalists and news outlets to verify whether a photo or video has been manipulated. Colleagues consider him the “dean of digital forensics,” since he helped to found the field over 20 years ago. He’s extremely skilled at finding signs of Photoshop, but the advent of AI has forced him to rethink his usual methods.


Initially, AI-generated media was easy to spot: it often exhibited unrealistic sensor noise and other telltale imperfections. Now, AI models have learned to reproduce realistic noise patterns, making images and videos look far more convincing, especially to the untrained eye. Farid can no longer rely on the pixel-level statistical methods he previously used to detect manipulated images.

Changing his approach meant shifting his thinking. “One of the things that I wanted to understand was: When somebody creates a fake, what will they not notice?” he shared. According to his earlier research, people are generally bad at judging geometry in photos and videos.


Farid says that the geometry of the scene in the picture or video is an obvious tell of AI generation.

“Generative AI doesn’t know about physics, doesn’t know about geometry, and it does all kinds of crazy [things],” Farid explained. Image models often leave geometric inconsistencies because they imitate visual patterns without fully understanding the physical rules of three-dimensional space.


In authentic photos, perspective follows strict rules. For example, lines that run parallel in reality (like the edges of floor tiles) should all meet at a single vanishing point in the image. If they don’t all intersect at one point, like in the images in this Reddit post, the image or video is likely not real. The same is true for reflections: lines connecting points on an object to the corresponding points in its mirror reflection are parallel in reality, so in the image they too should converge at a single vanishing point.

Lighting also plays a big role in determining whether a piece of media is real. In real life, shadows and highlights are consistent with the positions of the light sources and the camera. AI isn’t always able to place shadows and reflections on objects in accordance with these rules.
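Both kinds of check described above reduce to the same computation: take image lines that should share a common point (edges that are parallel in the scene, or lines from shadow points to the corresponding object points), intersect them pairwise, and measure how tightly the intersections cluster. Here is a minimal sketch in Python; the line segments and the notion of an acceptable spread are illustrative assumptions, not part of Farid's published tooling.

```python
import numpy as np

def homogeneous_line(p, q):
    """Line through 2D points p and q, in homogeneous coordinates (a, b, c)."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def intersect(l1, l2):
    """Intersection of two homogeneous lines, or None if they are parallel in the image."""
    x = np.cross(l1, l2)
    if abs(x[2]) < 1e-9:
        return None
    return x[:2] / x[2]

def vanishing_point_spread(segments):
    """Given segments [(p, q), ...] that should be parallel in the scene,
    return the mean pairwise intersection point and the largest distance
    of any intersection from that mean. A large spread relative to the
    image size suggests geometric inconsistency."""
    lines = [homogeneous_line(p, q) for p, q in segments]
    points = []
    for i in range(len(lines)):
        for j in range(i + 1, len(lines)):
            pt = intersect(lines[i], lines[j])
            if pt is not None:
                points.append(pt)
    points = np.array(points)
    center = points.mean(axis=0)
    spread = np.max(np.linalg.norm(points - center, axis=1))
    return center, spread

# Hypothetical floor-tile edges drawn toward a common vanishing point at (400, 120):
consistent = [((0, 0), (400, 120)), ((0, 60), (400, 120)), ((0, 240), (400, 120))]
center, spread = vanishing_point_spread(consistent)
print(center, spread)  # intersections cluster at (400, 120) with spread ~0
```

In a real image the segments would be traced by hand or detected automatically, and a forensic analyst would judge the spread against the image scale rather than against a fixed threshold.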

There’s no way around it: AI is here to stay. It holds incredible opportunities for future applications, but the key is using it responsibly and truthfully. While it will only continue to improve and learn more over time, everyone would benefit from knowing the tricks and tips for catching AI-generated media.


Kayla Asbach is a writer with a bachelor’s degree from the University of Central Florida. She covers relationships, psychology, self-help, pop culture, and human interest topics.

© Copyright @2026 LIDEA. All Rights Reserved.