Google unveiled Gemma 3n, the latest addition to its open AI model family, at Google I/O 2025 on Tuesday. Available in preview the same day, the model is designed to run smoothly on resource-constrained devices such as phones, laptops, and tablets, including those with less than 2GB of RAM.
Gemma 3n’s efficiency allows it to operate offline without cloud computing, which reduces costs and preserves privacy by keeping data off remote servers. Gus Martins, Gemma Product Manager, highlighted that Gemma 3n shares the same architecture as Gemini Nano and was engineered for “incredible performance.”
In conjunction with Gemma 3n’s release, Google is introducing MedGemma through its Health AI Developer Foundations program. Described as the company’s most capable open model for analyzing health-related text and images, MedGemma is designed to work seamlessly across various image and text applications, enabling developers to adapt it for their health apps. Martins elaborated, “MedGemma [is] our collection of open models for multimodal [health] text and image understanding… so that developers can adapt the models for their own health apps.”
On the horizon is SignGemma, an open model trained to translate sign language into spoken-language text, with a particular proficiency in American Sign Language (ASL) and English. This model aims to facilitate the creation of new apps and integrations for deaf and hard-of-hearing users. As Martins noted, “SignGemma is… the most capable sign language understanding model ever, and we can’t wait for… developers and deaf and hard-of-hearing communities to take this foundation and build with it.”
Despite Gemma’s popularity, with tens of millions of downloads, the model family has faced criticism over its custom, non-standard licensing terms, which some developers view as a commercial risk. Nonetheless, that hasn’t hindered the models’ widespread adoption.