A user needed just a few carefully crafted sentences to override an artificial intelligence system’s core directives, manipulating it through social engineering into transferring $47,000 in cryptocurrency. The episode demonstrates how vulnerable AI decision-making remains to human psychological tactics.
The recent defeat of Freysa, an AI game bot explicitly programmed never to transfer funds, reveals how autonomous systems can be tricked through social engineering despite clear instructions.
“This wasn’t simply an error within a financial application or a security vulnerability, but rather a crypto game that people would play to try and trick the AI application,” Seth Geftic, Vice President of Product Marketing at Huntress, a cybersecurity company, told PYMNTS. “Funnily enough, the strategy that the person used to finally ‘break through’ the model’s logic was fairly simple: asking it to ignore all previous instructions.”
User’s Winning Moves

Freysa was an AI agent holding $50,000 in crypto that was programmed never to transfer the funds. Users could pay a fee to try convincing it to break this rule, with one eventually succeeding after 482 attempts.
According to an X post by developer Jarrod Watts, the winning user used a three-part strategy: establishing a new “admin session” to override previous rules, redefining the transfer function as meant for receiving rather than sending funds, and finally announcing a fake $100 contribution that triggered the release of the entire prize pool of 13.19 ETH.
Watts called the project “one of the coolest projects we’ve seen in crypto.” It was designed as an open challenge in which participants could pay escalating fees to try to convince the AI to break its core directive.
Geftic explained that the Freysa AI hack, while dramatic, exploited a known weakness that major AI systems already defend against. Production AI used in finance and healthcare incorporates safeguards that would have blocked such social engineering attempts.
“With that in mind, this particular event does not teach us anything new but rather demonstrates how vital it is to follow the best cybersecurity practices, maintain systems at their most recent patches, and be aware of development related to software (AI or not) that a company uses,” he added.
Preventing AI Hacks

While AI can handle most financial transactions effectively, its vulnerabilities to evolving cyber threats mean it shouldn’t operate alone, Geftic said. The optimal security approach combines automated AI systems for routine operations with human oversight of critical decisions and transactions.
“For any interaction that poses a security risk (making a withdrawal or another transaction that has financial implications), the AI system can escalate the request to a human agent,” he added. “This system is already used within customer service chatbots with high success rates. AI can handle the majority of cases, reducing the workload of human agents while passing on any customers that really do need that extra help.”
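The escalation pattern Geftic describes can be sketched as a simple routing rule. This is a minimal illustration, not Freysa’s or any vendor’s actual logic; the intent categories and the monetary threshold are assumptions:

```python
# Minimal sketch of human-in-the-loop escalation for an AI agent.
# The risk categories and threshold below are illustrative assumptions.

HIGH_RISK_INTENTS = {"withdrawal", "transfer", "account_change"}

def route_request(intent: str, amount: float = 0.0) -> str:
    """Return 'ai' for routine requests, 'human' for risky ones."""
    if intent in HIGH_RISK_INTENTS:
        return "human"      # financial implications: escalate to a person
    if amount > 1000:       # assumed monetary threshold
        return "human"
    return "ai"             # routine query: let the AI handle it

print(route_request("balance_inquiry"))    # routine -> 'ai'
print(route_request("withdrawal", 50.0))   # financial -> 'human'
```

The point of the design is that the AI never gets to decide whether its own action is risky; that classification happens outside the model, so a persuasive prompt cannot talk its way past it.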
The Freysa game shows how trust remains a major hurdle in integrating AI with decentralized finance (DeFi), CoinDataFlow CEO Alexandr Sharilov told PYMNTS.
“The DeFi system itself is not stable, so such cases add to the skepticism,” he added. “It becomes more and more difficult for users to make a choice in favor of new technologies that have not yet been fully trusted.”
Sharilov said that to prevent future attacks, security systems need two key defensive layers. First, monetary transactions should require multiple approvers — both AI systems and human verifiers must sign off before funds move. Second, AI systems need ongoing testing through controlled attack simulations.
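Sharilov’s first layer, requiring both an automated check and a human sign-off before funds move, can be sketched as follows. The function names and the anomaly threshold are hypothetical illustrations, not a real custody system:

```python
# Sketch of a dual-approval gate: a transaction is released only when
# both an automated screen and a human verifier agree.
# All names and thresholds here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float
    destination: str

def ai_check(tx: Transaction) -> bool:
    # Automated screen, e.g. a size-based anomaly rule (assumed threshold).
    return tx.amount <= 10_000

def human_check(approved_by_human: bool) -> bool:
    # In practice this would be a review queue; here it's a simple flag.
    return approved_by_human

def release_funds(tx: Transaction, approved_by_human: bool) -> bool:
    # Both signers must approve before any funds move.
    return ai_check(tx) and human_check(approved_by_human)

tx = Transaction(amount=47_000, destination="0xabc")
print(release_funds(tx, approved_by_human=True))  # False: AI flags the size
```

Because neither approver alone can release funds, tricking the AI (as in the Freysa game) is no longer sufficient; an attacker would also have to get past the human reviewer.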
“On the one hand, we have human, financial gatekeepers who can analyze situations from different angles, using not only data and facts but also their own hunches,” he added. “On the other hand, we have a tool that is not overloaded, does not get tired, and has no biases. That’s why I think it’s significant to combine human and machine resources when it comes to cybersecurity and financial protection.”
The post Social Engineering Game Exposes AI’s Achilles’ Heel, Experts Say appeared first on PYMNTS.com.