New Delhi: Google on Tuesday introduced Gemma 3n at its I/O event, a new AI model designed to run smoothly on everyday devices like phones, laptops and tablets. The model is now available in preview and can understand and process audio, text, images and even video.
Gemini usually needs an internet connection, as most of its tasks are processed in the cloud, especially the more complex ones. For those who prefer on-device AI, there's Gemini Nano, which is made to handle tasks directly on smartphones. And now there's another option: Gemma 3n. Announced at Google I/O 2025, Gemma 3n is built on the same technology as Gemini Nano but is open-source and designed to run smoothly on phones, laptops and tablets without relying on the cloud.
The new system has been created in partnership with Qualcomm, MediaTek and Samsung, and is designed to deliver fast, efficient and private AI experiences right on your device. The Gemma 3n model, built on this tech, runs smoothly with just 2GB to 3GB of RAM and is surprisingly quick, performing on par with top AI models like Anthropic's Claude 3.7 Sonnet, based on recent Chatbot Arena rankings.
Gemma 3n is a multimodal AI model, which means it can understand text, voice and even images, whether they're on your screen or coming from your phone's camera in real time. It can read text, translate languages, solve math problems and answer complex questions on the spot. While the experience feels similar to what Gemini and Gemini Live offer on mobile, the key difference is that Gemma 3n is built to run directly on your device. It's not a standalone Google app, but a model that developers can integrate into apps or operating systems.
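For developers curious what that integration could look like, below is a minimal sketch, assuming the preview checkpoint is published on Hugging Face under an ID such as "google/gemma-3n-E2B-it" (an assumed name used here for illustration) and can be loaded through the generic transformers Auto classes; the exact model ID and loading path may differ from what Google ships.

    # A minimal sketch: loading an assumed Gemma 3n checkpoint with the
    # Hugging Face transformers library and running a short text prompt.
    from transformers import AutoProcessor, AutoModelForImageTextToText

    model_id = "google/gemma-3n-E2B-it"  # assumed checkpoint name, may differ

    # Load the processor (handles chat formatting) and the multimodal model.
    processor = AutoProcessor.from_pretrained(model_id)
    model = AutoModelForImageTextToText.from_pretrained(model_id)

    # Chat-style prompt; the same message format can also carry an image part
    # for multimodal use.
    messages = [
        {"role": "user",
         "content": [{"type": "text", "text": "Translate 'good morning' into Hindi."}]}
    ]

    # Turn the chat messages into model inputs and generate a short reply.
    inputs = processor.apply_chat_template(
        messages, add_generation_prompt=True, tokenize=True,
        return_dict=True, return_tensors="pt",
    )
    output = model.generate(**inputs, max_new_tokens=32)
    print(processor.batch_decode(output, skip_special_tokens=True)[0])

The same idea applies on phones, where the model would typically be bundled with an app and queried through an on-device runtime rather than a Python script.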
Google says Gemma 3n is very efficient, running smoothly with only 2GB to 3GB of RAM. It's also faster than many AI models, both proprietary and open-source; in fact, its performance ranks close to Anthropic's Claude 3.7 Sonnet, according to Chatbot Arena scores.