Artificial intelligence has become a powerful tool for businesses, enhancing efficiency and productivity across industries. However, a recent report from Google’s Threat Intelligence Group reveals that state-backed hackers from Iran, China, and North Korea are also leveraging AI, particularly Google’s Gemini chatbot, to improve their cyber operations.
The report highlights that while these hackers have not yet developed groundbreaking new capabilities, they are significantly increasing their efficiency and speed. This raises concerns about AI’s future role in cybercrime as attackers continue to explore new ways to exploit AI-driven tools.
Google’s investigation found that state-sponsored hackers were using Gemini for tasks that enhance their cyber activities, from researching vulnerabilities to drafting phishing lures and constructing false personas.
While Gemini’s built-in safeguards prevent hackers from using it for directly malicious tasks, cybercriminals are still finding ways to extract indirect assistance for their attacks. The report stresses that generative AI is not yet a game-changer for cybercriminals, but it is helping them work faster and at greater scale.
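To make that distinction concrete, consider the deliberately simplified sketch below of a keyword-style prompt filter. It is purely illustrative: the BLOCKED_PATTERNS list and the screen_prompt function are hypothetical and bear no resemblance to Gemini’s actual safeguards, which are far more sophisticated. The point is only that a filter tuned to overtly malicious requests can still pass dual-use questions that indirectly help an attacker.

```python
import re

# Hypothetical patterns a toy guardrail might treat as directly malicious.
# This is an illustration only, far simpler than any production safety system.
BLOCKED_PATTERNS = [
    r"\bwrite (ransomware|malware|a keylogger)\b",
    r"\bexploit code for\b",
    r"\bbypass (antivirus|edr)\b",
]

def screen_prompt(prompt: str) -> str:
    """Return 'blocked' for overtly malicious requests, 'allowed' otherwise."""
    lowered = prompt.lower()
    for pattern in BLOCKED_PATTERNS:
        if re.search(pattern, lowered):
            return "blocked"
    return "allowed"

# A direct request is refused, but a dual-use question slips through:
# this is the "indirect assistance" gap the report describes.
print(screen_prompt("Write ransomware that encrypts a user's files"))      # blocked
print(screen_prompt("Explain how file encryption APIs work on Windows"))  # allowed
```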
Google’s findings indicate that each country is using AI differently in its cyber operations. The primary users of Gemini for hacking activities are Iranian cybercriminals, followed by Chinese and North Korean hackers.
Iranian hackers appear to be the most active users of Gemini, employing it chiefly in support of phishing operations.
Phishing campaigns remain a primary attack strategy for Iranian hackers, and AI tools like Gemini help them craft more convincing and deceptive emails, increasing the likelihood that victims will fall for the scam and reveal sensitive information.
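One way to see why this matters: many lightweight phishing filters key on the clumsy language that traditionally betrayed scam emails. The sketch below is a hypothetical, purely illustrative heuristic of that kind; it catches a typo-ridden lure but lets a fluent, AI-polished message through, which is exactly the gap more convincing AI-drafted text exploits.

```python
# A toy phishing heuristic that flags crude scam tells in an email body.
# Entirely hypothetical: real email security stacks rely on far richer
# signals such as sender reputation, URL analysis, and ML classifiers.
SUSPICIOUS_PHRASES = [
    "verify you account",    # classic scam misspelling
    "act immediatly",        # another tell-tale typo
    "click here urgently",
]

def looks_phishy(body: str) -> bool:
    """Flag an email if it contains any of the crude scam tells above."""
    lowered = body.lower()
    return any(phrase in lowered for phrase in SUSPICIOUS_PHRASES)

clumsy = "Dear customer, please verify you account or it will be closed."
polished = ("Hi Dana, finance flagged a mismatch on an invoice. "
            "Could you confirm the payment details by end of day?")

print(looks_phishy(clumsy))    # True: the typo-based heuristic fires
print(looks_phishy(polished))  # False: fluent text sails past the filter
```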
Chinese cybercriminals, on the other hand, are focused on using AI to sharpen their technical skills, particularly researching and analyzing software vulnerabilities.
China is one of the most active players in cyber espionage, often targeting government agencies, corporations, and critical infrastructure. AI tools allow them to analyze vulnerabilities more efficiently, making their attacks more precise and difficult to detect.
North Korean cybercriminals have taken a different approach, using Gemini to fabricate the fake identities they need to infiltrate businesses.
This tactic is part of a larger North Korean scheme in which government-backed hackers pose as remote workers in US and Western companies. Once hired, they gain access to internal systems and can steal data or deploy malware.
Despite the concerning use of Gemini by hackers, Google emphasizes that current AI technology is not enabling any groundbreaking cyberattacks. The report states that large language models (LLMs) alone are unlikely to create entirely new hacking methods, but they can accelerate and amplify existing tactics.
Google also maintains that Gemini’s safeguards are keeping hackers from using it for highly sophisticated cybercrime. The company has built restrictions into its AI systems to block harmful queries and prevent misuse.
Cybersecurity analysts have long warned that AI could be used to enhance cyber threats, particularly in areas such as phishing, social engineering, and disinformation campaigns. A recent report from the UK’s National Cyber Security Centre echoed Google’s concerns, stating that while AI will increase the volume and impact of cyberattacks, its overall effect will remain uneven across different threat actors.
As AI technology advances, its role in cybersecurity—both defensive and offensive—will continue to evolve. While tools like Gemini can improve efficiency in legitimate business applications, they are also being misused by state-backed hackers to enhance cyber operations.
Although current AI safeguards are preventing major breakthroughs in cybercrime, experts warn that future developments in AI could change the picture. Governments, tech companies, and cybersecurity experts must therefore work together to ensure AI remains a force for good rather than a tool for cybercriminals.