The fraud economy has always been adaptive. For every 12-foot wall that a firm stands up, fraudsters have tried to find a corresponding 13-foot ladder.
But the rise of generative artificial intelligence (AI) and synthetic media has pushed the contest between merchants and attackers into a dramatically new phase.
“If a human can do it, we are now at a stage where the machines can do it in plausible ways,” Adam Hiatt, vice president of fraud strategy at Spreedly, told PYMNTS during a discussion for the March edition of the “What’s Next In Payments” (WNIP) series, “How Will AI Change Identity?”
In effect, fraud has been commoditized. What once required a high degree of sophisticated skill and equipment can now be executed through automated systems at scale. This is forcing companies to rethink how identity, authentication and payment security will function in a commerce landscape increasingly mediated by AI.
“The reality is that every human-looking signal is vulnerable,” said Hiatt.
When Fraud Becomes Software, It Collapses Human Signals

The central change across today’s fraud landscape is not that fraud is taking new forms, but that it has become dramatically easier to produce.
Hiatt described a ride-sharing platform whose document-verification system was designed to validate identity documents before allowing drivers to access payments. The attackers combined real facial images with synthetic document templates populated from data sets of personal information. The result was a scalable pipeline for producing convincing credentials capable of passing liveness and verification checks.
“The bad actors were able to systemically generate identity document images programmatically … composing these together into believable documents in real time,” Hiatt said.
Historically, this type of fraud would have been limited by physical constraints. Now the entire process can be automated. Among the most concerning developments are synthetic biometric signals such as deepfake faces, cloned voices and other human characteristics replicated by AI models.
“The ones that are most disruptive are the fake biometric signals, images, voices,” Hiatt said. “Those are signals that traditionally were the hardest to fake.”
But while the technology frontier is advancing rapidly, Hiatt stressed that many fraud operations still rely on tactics that are decades old.
Techniques like credential stuffing, card testing and bot-driven account takeovers remain the workhorses of cybercrime. They’re just being executed more efficiently.
“At this point, they’re still the biggest scale attacks,” said Hiatt.
Understanding the Security Reality for Merchants

For merchants trying to navigate this shifting landscape, the biggest challenge may be uncertainty. Standards are still forming. New technologies are emerging rapidly. And the identity infrastructure underpinning digital commerce is being rebuilt in real time.
In this environment, flexibility becomes a strategic advantage. Hiatt argued that merchants should focus less on predicting which technologies will win and more on ensuring their systems can adapt.
“Being service agnostic is incredibly important. We can’t all predict the future,” he said.
For most merchants, this means adopting layered defenses rather than relying on a single method of verification. Basic tools such as IP risk scoring, VPN detection and bot-network identification still play an important role. More sophisticated defenses include device fingerprinting, location analysis and behavioral monitoring. But even these techniques are being challenged by increasingly advanced automation.
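The layering idea can be sketched in code: each defense contributes a weighted signal, and the decision comes from the combined score rather than any single check. The weights, thresholds and field names below are illustrative assumptions, not a real vendor's scoring model.

```python
# Each layer inspects a transaction dict and returns a risk contribution.
# Weights and thresholds here are hypothetical.

def ip_risk(txn: dict) -> float:
    return 0.6 if txn.get("ip_on_blocklist") else 0.0

def vpn_check(txn: dict) -> float:
    return 0.3 if txn.get("vpn_detected") else 0.0

def device_check(txn: dict) -> float:
    # e.g. a device fingerprint never before seen on this account
    return 0.4 if txn.get("new_device") else 0.0

def behavior_check(txn: dict) -> float:
    return 0.5 if txn.get("abnormal_velocity") else 0.0

LAYERS = [ip_risk, vpn_check, device_check, behavior_check]

def assess(txn: dict) -> str:
    """Orchestrate the layers: combine individual signals into one decision."""
    score = sum(layer(txn) for layer in LAYERS)
    if score >= 1.0:
        return "block"
    if score >= 0.5:
        return "step_up"  # challenge with additional verification
    return "allow"
```

The point of the structure is that no single spoofed signal decides the outcome: an attacker who defeats the device fingerprint still has to beat the behavioral and network layers simultaneously.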
“You’re starting to need to layer everything. You need to orchestrate your responses, and you need to be able to pull these individual pieces of the puzzle together,” Hiatt said. “It’s an arms race.”
The complexity reflects the reality that fraud patterns differ by industry. High-value goods, digital services, and platforms susceptible to phishing all require different defensive strategies.
“I wish it were as easy as one size fits all,” Hiatt said.
The coming wave of AI-driven commerce could complicate the problem further. In today’s eCommerce architecture, identity verification typically happens directly between the customer and the merchant. But AI agent-driven transactions introduce new intermediaries that sit between the two and can create a distributed identity problem.
“This is the trillion-dollar question,” Hiatt said.
Why Tokenization Is Becoming Essential

If identity verification is one emerging challenge, data protection is another. As AI systems, payment providers, merchants and platforms exchange information across increasingly complex ecosystems, the risks associated with transmitting sensitive data multiply.
“When you add more parties who are not the same business entity into a transaction flow, tokenization becomes essential,” Hiatt said.
Tokenization allows systems to exchange usable references to payment credentials without exposing the underlying information. In an AI-driven commerce environment, a single purchase could involve multiple independent platforms interacting in sequence. Passing raw payment data between them would create enormous security and compliance risks.
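The mechanism can be illustrated with a toy token vault: intermediaries pass around an opaque, randomly generated token, and only the vault can resolve it back to the card number at charge time. This is a simplified sketch; a production vault would encrypt stored card data, scope tokens per merchant, and handle expiry.

```python
import secrets

class TokenVault:
    """Toy tokenization vault: maps opaque tokens to card numbers (PANs).
    Illustrative only -- real vaults encrypt at rest and scope tokens."""

    def __init__(self) -> None:
        self._store: dict[str, str] = {}  # token -> PAN

    def tokenize(self, pan: str) -> str:
        # The token is random, so it carries no recoverable card data.
        token = "tok_" + secrets.token_hex(12)
        self._store[token] = pan
        return token

    def detokenize(self, token: str) -> str:
        # Only the vault owner can perform this lookup.
        return self._store[token]
```

In an agent-mediated purchase, each platform in the chain would see only the `tok_…` reference; a compromise of any intermediary exposes tokens that are useless outside the vault.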
Looking ahead, Hiatt believes the most transformative development would be the emergence of portable digital identity systems that allow users to authenticate themselves seamlessly across platforms and merchants.
“The ability to say, ‘yes, it’s me and here is my payment,’ that’s the north star,” he said.
The post Fraud Gets Cheaper, Merchants Push Back appeared first on PYMNTS.com.