Google Gemini: If you use Google's AI assistant Gemini, you need to be vigilant.
If you use Google's AI assistant Gemini, you need to be vigilant. A significant security warning regarding Gemini has recently emerged, raising concerns about user privacy. Google had added features to Gemini, such as access to Calendar, to make it easier for users to manage meetings and schedules, but this very feature now appears to be creating a new vulnerability for hackers.
With Calendar access, Gemini can provide users with information about their appointments, free slots, and upcoming events. At first glance, this feature seems incredibly useful, as it eliminates the need to repeatedly open the calendar. However, security experts say that when AI is given such deep access, the risks also increase. Gemini's ability to understand language and context can be exploited.
Researchers at the cybersecurity firm Miggo Security reported that hackers were using a specific technique called Indirect Prompt Injection. In this technique, the user is sent a seemingly innocuous Google Calendar invite. It looks completely normal, but its description contains hidden instructions that are understood not by humans, but by the AI. These instructions are not like code, but are written in plain language, making it easy for Gemini to be misled.
When the user asks Gemini whether they are free on a particular day or time, the AI scans the entire calendar. In this process, it accesses the suspicious invite containing the hidden instructions. Gemini then summarizes the meetings and events and creates a new calendar event. From the outside, everything looks completely normal, but during this process, the user's private information can be silently exposed.
After discovering this vulnerability, Miggo Security alerted Google's security team. After the investigation, Google acknowledged the vulnerability and has since fixed it. Experts say this case is a major lesson, as AI-related threats are no longer limited to just code. After discovering this vulnerability, Miggo Security alerted Google's security team. Following an investigation, Google acknowledged the weakness and subsequently fixed it. Experts say this incident serves as a crucial lesson, as AI-related threats are no longer limited to just code.