Google has introduced its Safety Charter in India to expand AI-driven fraud detection and combat scams. This initiative targets India, Google’s largest market outside the U.S., where digital fraud is increasing.
The Indian government’s data indicates that fraud related to the Unified Payments Interface (UPI) rose 85% year-over-year, reaching nearly 11 billion Indian rupees ($127 million) last year. The rise of digital arrest scams, where fraudsters pose as officials to extort money, also highlights the problem.
To address these issues, Google has launched a security engineering center (GSec) in India, joining existing centers in Dublin, Munich, and Malaga.
Announced at the Google for India summit last year, GSec aims to collaborate with local entities, including government, academia, SMEs, and students, to develop solutions for cybersecurity, privacy, safety, and AI challenges, Google VP of security engineering Heather Adkins told TechCrunch in an interview.
Google stated in a blog post that it has partnered with the Ministry of Home Affairs’ Indian Cyber Crime Coordination Centre (I4C) to increase cybercrime awareness. This partnership builds on existing programs such as DigiKavach, launched in 2023 to mitigate harmful effects from malicious financial and predatory loan apps.
According to Adkins, GSec in India will focus on online scams and fraud, enterprise and governmental cybersecurity, and responsible AI development. Google aims to utilize its engineering capabilities in India to address specific local challenges. Adkins stated, “These three areas will become part of our safety charter for India, and over the coming years… we want to use the fact that we have engineering capability here to solve for what’s happening in India, close to where the users are.”
Globally, Google uses AI to combat online scams, removing millions of fraudulent ads and the ad accounts behind them. The company intends to expand its AI deployment in India to further curb digital fraud.
Google Messages uses AI-powered Scam Detection, protecting users from over 500 million suspicious messages monthly. Last year, Google piloted enhanced Play Protect protections in India, which blocked nearly 60 million attempts to install high-risk apps, stopping more than 220,000 unique apps on over 13 million devices. Google Pay, meanwhile, displayed 41 million warnings against potentially fraudulent transactions.
Adkins, a founding member of Google’s security team, also discussed how malicious actors can misuse AI tools, noting the risk of threat actors adapting deepfakes, images, and translation utilities to refine phishing scams. Adkins said, “We’re obviously tracking AI very closely, and up until now, we’ve mostly seen the large language models like Gemini used as productivity enhancements. For example, to make phishing scams a bit more effective — especially if the actor and the target have different languages — they can use the benefit of translation to make the scams more believable using deepfakes, images, video, etc.”
Google is extensively testing its AI models to ensure they function safely and is developing frameworks, such as its Secure AI Framework, to prevent misuse of Gemini models, according to Adkins. She stated, “This is important for generated content that might be harmful, but also actions that it might take.”
Adkins expressed concern about the rapid pace of protocol deployment within the industry and emphasized the need for safety considerations to be integrated early in the development process. Adkins stated, “The industry is moving very, very quickly [by] putting protocols out. It’s almost like the early days of the internet, where everybody’s releasing code in real time, and we’re thinking about safety after the fact.”
Google is collaborating with the research community and developers rather than imposing its own frameworks to limit potential abuses of generative AI, according to Adkins. She said, “One of the things you don’t want to do is constrain yourself too much in the early research days.”
In addition to the risks posed by generative AI, Adkins identified commercial surveillance vendors, such as spyware developers, as a major threat. She stated, “These are companies spun up all over the world, and they develop and make and sell a platform for hacking. You might pay $20, you might pay $200,000, just depending on the sophistication of the platform, and it allows you to scale attacking people without any expertise on your own.”
Adkins said that India faces unique challenges due to its size, including AI-driven fraud and digital arrest scams. She also stated, “You can see how quickly the threat actors themselves are advancing… I love studying cyber in this region because of that. It’s often a hint of what we’re going to see worldwide at some point.”
Regarding authentication, Adkins acknowledged the difficulty of transitioning away from passwords in a diverse market like India, despite the known security vulnerabilities of passwords. Adkins stated, “We knew for a very long time that passwords were not secure. This concept of a multi-factor authentication was a step forward,” adding that Indians likely favor SMS-based authentication.