Google Gemini Alert: If you use Google Gemini for AI assistance, you should be aware of a major security warning. A significant security vulnerability has been discovered in Google's AI assistant, Gemini. Gemini recently added new features, such as access to Google Calendar, which were touted as productivity enhancers. However, security researchers have now revealed that this very feature could pose a threat to user privacy. Hackers can steal calendar data using a specific technique, without the user even suspecting anything.
How Gemini's New Feature Works
Google Gemini was recently given access to users' Calendar app so it could provide information about meetings, schedules, and free time. This feature seems quite helpful, as it eliminates the need for users to constantly check their calendars. But as the AI gained more access, the security risks also increased. According to researchers, the AI can be manipulated through language and context.
What is Indirect Prompt Injection?
Researchers at Miggo Security explained that hackers were using a technique called Indirect Prompt Injection. In this method, the hacker sends the user a seemingly innocuous Google Calendar invite. Hidden instructions are embedded in the description of this invite, which Gemini can understand. These instructions are not in code, but in plain language, which deceives the AI.
A Simple Question, but Big Consequences
When the user asks Gemini if they are free on a particular day, the AI scans the entire calendar. During this process, it encounters the malicious invite containing the hidden instructions. Gemini then automatically summarizes the user's meetings and creates a new calendar event. On the surface, everything seems normal, but private information has already been leaked.
Google's Response and the Big Lesson
Miggo Security informed Google's security team about this vulnerability. After investigation, Google confirmed the weakness and has fixed it. Researchers say that AI-based features are creating a new type of threat. Now, vulnerabilities may be hidden not only in the code, but also in the language, context, and behavior of the AI. This warning is considered extremely important for future AI systems.
Disclaimer: This content has been sourced and edited from TV9. While we have made modifications for clarity and presentation, the original content belongs to its respective authors and website. We do not claim ownership of the content.