If you’ve been online over the past week, you’ll have been bombarded with AI-generated Barbie doll images. These include everything from Barbie Beethoven to Barbie chicken, and even (terrifyingly) stranger creations. It all sounds like fun and games – except an expert is now warning that the trend could be putting users at risk of becoming victims of deepfakes.
The trend comes off the back of GPT-4o’s release, which saw the AI bot gain over one million users within just one day. Unsurprisingly, people have been using the tool to do what the Internet does best: make memes. Some have sparked controversy. The recent surge in Ghibli-style images prompted a resurfacing of Spirited Away director Hayao Miyazaki’s condemnation of AI art. Now, the latest iteration involves prompting the bot to turn users into their very own Barbie doll.
It follows on from the innocent real-life trend of cinema-goers posing in life-sized Barbie boxes, which appeared around the release of 2023’s Barbie movie. In many ways, the AI trend has allowed fans to relive the hype and excitement of the box office hit.
It's also allowed users to create amusing images of celebrities and public figures. If you've ever wondered what Elon Musk would look like as a plastic doll, you no longer have to.
However, according to research by the AI prompt management company AIPRM, uploading your photos to ChatGPT does more than let you relive funny moments – it could allow your image to live on online in ways you don’t want it to. This is because, under ChatGPT’s privacy policy, uploaded images can be collected and stored to fine-tune its models.
According to Christoph C. Cemper, founder of AIPRM: “Images shared on AI platforms could be scraped, leaked, or used to create deepfakes, identity theft scams, or impersonations in fake content. You could unknowingly be handing over a digital version of yourself that can be manipulated in ways you never expected.”
What are deepfakes and are you at risk?

Deepfakes are images or videos that use AI to mimic a person’s voice or facial features. The concept first entered the public consciousness in 2017, when a Redditor created a subreddit called r/deepfakes, using face-swapping technology to post fake pornographic videos and images of celebrities.
Deepfakes have since been described as a “global crisis” by the European Commission, which in 2024 highlighted how these fake images can be used to convincingly impersonate or misrepresent individuals.
One of the most notable victims of this abuse of AI is pop megastar Taylor Swift. In January 2024, sexually explicit AI-generated images of the singer began to circulate on social media, where they were viewed millions of times. One of the posts was viewed 47 million times before being pulled from X.
While deepfakes can be used for any kind of image manipulation, their most common use is pornography. According to a 2023 report by Home Security Heroes, pornographic deepfakes account for 98% of all deepfake content. Even more concerning, 99% of those targeted are women.
Earlier this year, a British man was arrested for using AI to create pornographic images of women he knew in real life, pulling photos he found on their social media accounts. The images were then posted in a forum glorifying “rape culture”.
More troubling still, deepfakes are a popular search term online. One recent study counted 2,479 searches for deepfakes per million people in the UK in December 2024 – the eleventh-highest search volume in Europe.
One way to protect yourself against your images being scraped by ChatGPT, Christoph recommends, is to change your privacy settings: users can opt out of having their uploads used as training data.