Researchers say they have demonstrated a method for extracting artificial intelligence (AI) models by capturing the electromagnetic signals given off by the hardware running them, claiming accuracy rates above 99%.
The discovery could pose challenges for commercial AI development, where companies like OpenAI, Anthropic and Google have invested heavily in proprietary models. However, experts say that the real-world implications and defenses against such techniques remain unclear.
“AI theft isn’t just about losing the model,” Lars Nyman, chief marketing officer at CUDO Compute, told PYMNTS. “It’s the potential cascading damage, i.e. competitors piggybacking off years of R&D, regulators investigating mishandling of sensitive IP, lawsuits from clients who suddenly realize your AI ‘uniqueness’ isn’t so unique. If anything, this theft insurance trend might pave the way for standardized audits, akin to SOC 2 or ISO certifications, to separate the secure players from the reckless.”
Hackers targeting AI models pose a growing threat to commerce as businesses rely on AI for competitive advantage. Recent reports reveal thousands of malicious files have been uploaded to Hugging Face, a key repository for AI tools, jeopardizing models used in industries like retail, logistics and finance.
National security experts caution that weak security measures risk exposing proprietary systems to theft, as seen in the OpenAI breach. Stolen AI models can be reverse-engineered or sold, undercutting businesses’ investments and eroding trust, while enabling competitors to leapfrog innovation.
An AI model is a mathematical system trained on data to recognize patterns and make decisions; it works like a recipe that tells a computer how to accomplish specific tasks, such as identifying objects in photos or writing text.
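To make that "recipe" analogy concrete, here is a minimal sketch that trains a toy classifier on a handful of labeled examples and then asks it for a decision. The data, the task and the use of scikit-learn are hypothetical choices made to keep the example self-contained; they are not drawn from any system mentioned in this article.

```python
# A toy "AI model": a function learned from example data rather than written by hand.
# Hypothetical example using scikit-learn's LogisticRegression.
from sklearn.linear_model import LogisticRegression

# Made-up training data: [hours of daylight, temperature in C] -> 1 if "summer", else 0
X = [[14, 25], [15, 28], [9, 2], [8, -1], [13, 22], [10, 5]]
y = [1, 1, 0, 0, 1, 0]

model = LogisticRegression().fit(X, y)   # training: the model learns the pattern from the data
print(model.predict([[14, 24]]))         # the learned "recipe" makes a decision (expected: 1)
```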
AI Models Exposed

North Carolina State University researchers have shown a new way to extract AI models by capturing electromagnetic signals from processing hardware, achieving up to 99.91% accuracy. By placing a probe near a Google Edge Tensor Processing Unit (TPU), they could analyze signals that revealed critical information about the model’s structure.
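The paper’s actual technique is far more involved, but the hedged sketch below illustrates the general idea described above: compare a captured signal trace against a library of signatures recorded for known candidate configurations and report the closest match, which reveals structural details of the model. The signature names, the noise model and the correlation-based matching here are assumptions made purely for illustration and are not the researchers’ procedure.

```python
# Simplified, hypothetical illustration of side-channel signature matching.
# A captured electromagnetic trace is compared against pre-recorded signatures
# for candidate layer configurations; the best correlation is the inferred layer.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical signatures for three candidate layer configurations.
candidate_signatures = {
    "conv3x3_64_filters": rng.normal(0.0, 1.0, 500),
    "conv5x5_32_filters": rng.normal(0.5, 1.0, 500),
    "dense_128_units": rng.normal(1.0, 1.0, 500),
}

# A "captured" trace: one known signature plus measurement noise.
captured_trace = candidate_signatures["conv3x3_64_filters"] + rng.normal(0.0, 0.2, 500)

def best_match(trace, signatures):
    """Return the candidate whose signature correlates most strongly with the trace."""
    return max(signatures, key=lambda name: np.corrcoef(trace, signatures[name])[0, 1])

print(best_match(captured_trace, candidate_signatures))  # expected: conv3x3_64_filters
```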
According to the researchers, the attack does not require direct access to the system, posing a security risk for AI intellectual property. The findings underscore the need for improved safeguards as AI technologies are deployed in commercial and critical systems.
“AI models are valuable, we don’t want people to steal them,” Aydin Aysu, co-author of a paper on the work and an associate professor of electrical and computer engineering at North Carolina State University, said in a blog post. “Building a model is expensive and requires significant computing resources. But just as importantly, when a model is leaked, or stolen, the model also becomes more vulnerable to attacks — because third parties can study the model and identify any weaknesses.”
AI Signal Security Gap

The susceptibility of AI models to attacks could compel businesses to rethink the use of some devices for AI processing, tech adviser Suriel Arellano told PYMNTS.
“Companies might move toward more centralized and secure computing or consider less theft-prone alternative technologies,” he added. “That’s a potential scenario. But the much more likely outcome is that companies which derive significant benefits from AI and work in public settings will invest heavily in improved security.”
Despite the risks of theft, AI is also being used to improve security. As PYMNTS previously reported, artificial intelligence is strengthening cybersecurity by enabling automated threat detection and streamlined incident response through pattern recognition and data analysis. AI-powered security tools can both identify potential threats and learn from each encounter, according to Lenovo CTO Timothy E. Bates, who highlighted how machine learning systems help teams predict and counter emerging attacks.
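One common form of the pattern recognition Bates refers to is anomaly detection over activity data: learn what "normal" looks like, then flag departures from it. The minimal sketch below, using scikit-learn’s IsolationForest on made-up activity counts, is a hypothetical illustration of that approach, not a depiction of any specific vendor’s tooling.

```python
# Hypothetical sketch of pattern-based threat detection: learn from "normal"
# historical activity, then flag new events that deviate from that pattern.
from sklearn.ensemble import IsolationForest

# Made-up history: [login attempts per hour, megabytes transferred]
normal_activity = [[5, 20], [7, 25], [6, 18], [4, 22], [8, 30], [5, 19]]

detector = IsolationForest(random_state=0).fit(normal_activity)

# predict() returns 1 for "looks normal" and -1 for "anomalous".
# Likely output: [ 1 -1 ] -- the extreme event is flagged as anomalous.
print(detector.predict([[6, 21], [250, 900]]))
```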