Fake PAN and Aadhaar Cards Generated Using Google’s Gemini Nano Tool Spark Major Security Concerns
Siddhi Jain November 27, 2025 01:15 PM

Techie Exposes AI Risk: Fake PAN and Aadhaar Cards Created Using Google Gemini Nano—Raises Alarming Security Concerns

A tech professional from Bengaluru has raised serious alarms about the growing threat of identity document forgery using Artificial Intelligence tools. The concern surfaced after the techie demonstrated how Google’s Gemini Nano AI tool, also known as “Nano Banana”, can create highly realistic fake PAN and Aadhaar cards that are difficult to distinguish from genuine ones. The warning has triggered discussions around data security, digital fraud, and the urgent need for stronger verification systems.

The discovery was shared by Bengaluru-based techie Harveen Singh Chaddha, who posted images online showing fake identity cards generated using the AI tool. The AI-created documents, belonging to a fictional person named “Twitterpreet Singh”, appeared almost real, raising immediate red flags about the potential misuse of advanced AI tools.

How Gemini Nano Creates Near-Perfect Fake Identity Cards

The core concern is that Gemini Nano Pro, a version of the AI model, can recreate government identity cards, including Aadhaar, PAN, and even passport-style photos, with surprising precision. The AI-generated samples look authentic enough to trick people at first glance, making fraud detection extremely difficult without specialized tools.

Cybersecurity experts fear that such technology can be exploited by scammers for:

  • Digital identity theft

  • Financial fraud using forged documents

  • Illegal SIM card activation

  • KYC verification bypass

  • Bank account or credit card fraud

According to experts, the threat is significant because verification systems may fail to detect minor yet critical differences in forged documents.

Visible and Invisible Watermarking May Not Be Enough

All AI-generated images from Google’s Gemini currently include a visible watermark in the bottom-right corner. This is meant to identify AI-created content. Additionally, Google has introduced SynthID, an invisible digital watermark developed by DeepMind. This is embedded into the image’s pixels to help detect whether an image was created using AI.
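SynthID’s actual algorithm is proprietary and designed to survive cropping and compression, so the following is only a conceptual illustration of what “embedding a mark into the image’s pixels” means. This naive least-significant-bit (LSB) sketch, with a hypothetical 8-bit signature, also shows why simple watermarking schemes are fragile:

```python
# Illustrative sketch only -- NOT SynthID. SynthID is proprietary and far more
# robust; this naive least-significant-bit (LSB) scheme just shows the idea of
# hiding a signature in pixel data, and why simple marks are easy to destroy.

WATERMARK = [1, 0, 1, 1, 0, 1, 0, 1]  # hypothetical 8-bit signature

def embed(pixels, mark=WATERMARK):
    """Overwrite the least significant bit of the first len(mark) pixels."""
    out = list(pixels)
    for i, bit in enumerate(mark):
        out[i] = (out[i] & ~1) | bit
    return out

def detect(pixels, mark=WATERMARK):
    """Return True if the LSBs of the first len(mark) pixels match the mark."""
    return [p & 1 for p in pixels[: len(mark)]] == mark

image = [200, 13, 77, 54, 120, 9, 255, 31, 64]  # toy grayscale pixel values
marked = embed(image)
print(detect(marked))  # True: the signature is present in the LSBs

# Any lossy re-encode perturbs low-order bits and erases a naive mark:
recompressed = [p & ~1 for p in marked]  # simulate aggressive quantization
print(detect(recompressed))  # False: the watermark did not survive
```

The fragility shown in the last two lines is precisely why production systems like SynthID spread the signal across the image in ways meant to survive editing, and why detection still depends on the detector knowing the scheme.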

However, Harveen points out two major loopholes:

  1. Visible watermark can easily be removed

    • Using tools like Photoshop, fraudsters can erase the watermark without leaving noticeable traces.

  2. SynthID has limitations

    • It can currently verify only content generated through Google’s Gemini models

    • It cannot detect images produced by other tools like ChatGPT, Midjourney, or Stable Diffusion, leaving a wide gap in protection.

ChatGPT Also Used for Fake Image Generation

While concerns initially focused on Gemini Nano, Harveen notes that this trend is not exclusive to Google. Tools such as ChatGPT were among the first to enable realistic synthetic image generation through prompt input. With improvements in AI, fake document creation has become increasingly simple and accurate.

Compared to early models, Gemini Nano and Nano Pro create significantly more realistic results, making the issue more pressing than ever.

Not Fear, But Awareness: Message from the Techie

Explaining the purpose behind his demonstration, Harveen Singh Chaddha said the intention was not to create panic, but to create awareness about rapidly advancing AI tools and the risks they pose if not properly regulated.

“The goal was not to scare people, but to show what today’s AI models can achieve. They work incredibly fast and with fewer errors compared to older technologies. As these tools become more powerful, verification systems must also evolve,” he said.

He emphasized that governments, banks, and digital service providers need to upgrade security frameworks to match the pace of technological advancement.

Conclusion

The incident highlights a critical issue: while AI is transforming industries with revolutionary benefits, it also opens doors to misuse that could threaten identity safety and financial security. The ability to fabricate authentic-looking identity documents raises a serious challenge for law enforcement, cybersecurity professionals, and regulatory authorities.

As AI becomes more sophisticated, stronger digital authentication systems, improved watermarks, and real-time verification tools are crucial to prevent fraud and maintain trust in digital services.

© 2025 LIDEA. All Rights Reserved.