AI Explainer: How Large Transaction Models Are Securing Payment Flows

DATE POSTED: May 14, 2025

Digital transformation has opened a fire hose of new information flows, particularly around payments and commerce.

Every day, billions of payments flow through networks, banks and FinTech platforms. While this explosion in digital transactions has dramatically expanded the threat landscape, it has also expanded the data-driven defenses available to payments stakeholders. The sheer volume of transaction data now flowing through payments environments gives defenders real firepower in the ongoing fraud arms race.

Enter large transaction models (LTMs), a new class of generative artificial intelligence (AI) models adapted to detecting financial crime and poised to revolutionize security, efficiency and operational effectiveness across the financial services landscape.

For years, payments players and banks have employed traditional machine learning (ML) models to enhance transaction processing and improve user experiences by boosting conversion rates and reducing fraud. These approaches, built on clearly defined features like bank identification numbers (BINs), ZIP codes and payment methods, have delivered substantial gains, but they have intrinsic limitations.

For example, manual feature selection restricts each model's scope, demanding separate, task-specific models for different challenges such as authorization, fraud detection and dispute resolution.
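To make that contrast concrete, here is a minimal sketch of the feature-engineered approach, assuming a scikit-learn pipeline; the field names, sample data and labels are invented for illustration and are not drawn from any real payments system.

```python
# Hypothetical sketch of the traditional, feature-engineered approach:
# one hand-built model per task (here, fraud scoring). Fields and data
# are illustrative only.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.preprocessing import OneHotEncoder
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline

# A few manually selected features: BIN, ZIP code, payment method, amount.
transactions = pd.DataFrame({
    "bin": ["411111", "550000", "411111", "601100"],
    "zip": ["10001", "94105", "10001", "60601"],
    "method": ["card_present", "card_not_present", "card_not_present", "card_present"],
    "amount": [42.50, 900.00, 31.20, 12.99],
    "is_fraud": [0, 1, 0, 0],  # labels would come from historical chargebacks
})

features = ColumnTransformer([
    ("categorical", OneHotEncoder(handle_unknown="ignore"), ["bin", "zip", "method"]),
], remainder="passthrough")  # keep the numeric 'amount' column as-is

# The model only ever sees the features chosen up front; a separate,
# similar pipeline would be needed for authorization or dispute tasks.
fraud_model = Pipeline([
    ("features", features),
    ("classifier", GradientBoostingClassifier()),
])
fraud_model.fit(transactions.drop(columns="is_fraud"), transactions["is_fraud"])
print(fraud_model.predict_proba(transactions.drop(columns="is_fraud"))[:, 1])
```

Each new task means another round of feature engineering and another model to maintain, which is exactly the constraint LTMs aim to remove.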

Recognizing these constraints, businesses are increasingly exploring a radically different avenue: transformer-based models, renowned for their transformative impact in the field of natural language processing (NLP).

Read more: From Faked Invoices to Faked Executives, GenAI Has Transformed Fraud

What Are Large Transaction Models?

Similar to the large language models (LLMs) that have transformed language-based tasks, LTMs harness deep learning to process and analyze vast sequences of transaction data.

Wolfgang Berner, co-founder and CPO of Hawk, last summer discussed with PYMNTS the opportunities that LTMs represent in establishing more robust, accurate and comprehensive detection and prevention mechanisms.

“The core idea is we treat transactions as sentences, teaching the transformer model the language and grammar of transactions, similar to how large language models like GPT-4 are trained on the text of the web,” Berner said. “And by doing that, it develops a very good understanding of the transactions, how transactions relate to each other, and what is genuine or possibly suspicious with them.”

Built predominantly on transformer-based architectures, these models ingest millions, even billions, of historical transaction records, learning complex patterns and behaviors to identify anomalies, predict fraudulent activities and streamline compliance processes in real time.
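As a rough illustration of the "transactions as sentences" idea, the sketch below tokenizes a handful of invented transaction fields and passes them through a small PyTorch transformer encoder. The vocabulary, field names and model sizes are hypothetical and do not reflect Hawk's or any other vendor's actual architecture.

```python
# Minimal, illustrative sketch of treating transactions as sentences.
# Everything here (fields, vocabulary, model sizes) is an assumption.
import torch
import torch.nn as nn

# 1. Tokenize: each transaction becomes a short "sentence" of field=value tokens.
def tokenize(txn):
    return [f"{k}={v}" for k, v in txn.items()]

txns = [
    {"issuer": "bank_a", "mcc": "5411", "amount_bucket": "low", "country": "US"},
    {"issuer": "bank_a", "mcc": "5411", "amount_bucket": "high", "country": "RU"},
]
vocab = {tok: i + 1 for i, tok in enumerate(sorted({t for x in txns for t in tokenize(x)}))}

def encode(txn):
    return torch.tensor([vocab[t] for t in tokenize(txn)])

# 2. Embed the token sequence and run it through a small transformer encoder.
embed = nn.Embedding(len(vocab) + 1, 32)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=32, nhead=4, batch_first=True), num_layers=2
)
scorer = nn.Linear(32, 1)  # anomaly / fraud-likelihood head

batch = torch.stack([encode(t) for t in txns])      # (batch, seq_len)
hidden = encoder(embed(batch))                      # (batch, seq_len, 32)
scores = torch.sigmoid(scorer(hidden.mean(dim=1)))  # one score per transaction
print(scores)  # untrained here, so scores are meaningless until fit on history
```

In a production setting the encoder would be trained on millions of historical records, and the learned representations, rather than hand-picked features, would drive the downstream fraud, authorization or compliance decisions.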

This technique effectively positions payments in a vast, vector-based landscape, where proximity conveys deep semantic connections. Transactions processed by the same issuer cluster together naturally, while those from identical banks reside even closer. Transactions sharing attributes such as a common email address become nearly indistinguishable in their embedding, highlighting subtle yet significant patterns invisible to traditional models.
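That geometry can be illustrated with cosine similarity. In the sketch below the embedding vectors are made up to mimic the clustering behavior described above; in practice they would come from a trained transaction model.

```python
# Illustrative only: measuring proximity in an embedding space once a trained
# model (not shown) has mapped each transaction to a vector. Vectors are invented.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical embeddings produced by a trained transaction model.
txn_same_issuer_a = np.array([0.90, 0.10, 0.05])
txn_same_issuer_b = np.array([0.88, 0.12, 0.07])  # same issuer: nearby
txn_shared_email  = np.array([0.89, 0.11, 0.06])  # shared email: nearly identical
txn_unrelated     = np.array([0.05, 0.95, 0.40])  # different behavior: far away

print(cosine(txn_same_issuer_a, txn_same_issuer_b))  # high similarity
print(cosine(txn_same_issuer_a, txn_shared_email))   # even higher
print(cosine(txn_same_issuer_a, txn_unrelated))      # low similarity
```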

Read also: Rise of Industrialized Fraud Heats Up Cyber Arms Race

Transforming Fraud Detection and Prevention

LTMs continuously refine their understanding based on emerging fraud tactics, making them more effective over time. Payments carry hidden sequential dependencies and intricate feature interactions, but these complexities have historically eluded capture through manual feature engineering.

“It’s become harder to monitor all the various ways that fraudsters attack businesses,” Eric Frankovic, general manager of business payments at WEX, told PYMNTS in September.

Another significant challenge in fraud detection is balancing vigilance with customer convenience. LTMs can help to distinguish subtle behavioral differences between legitimate and fraudulent transactions, markedly reducing false positives and improving customer experience.
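One way to operationalize that balance, sketched below with invented scores and labels, is to choose the decision threshold that keeps the false-positive rate on legitimate payments under a business-defined target.

```python
# Hedged sketch of the vigilance-vs-convenience trade-off: given model scores
# and historical labels (both invented), pick the threshold that keeps the
# false-positive rate on legitimate payments under a target.
import numpy as np

scores = np.array([0.02, 0.10, 0.15, 0.30, 0.55, 0.80, 0.92, 0.97])  # model output
labels = np.array([0,    0,    0,    0,    0,    1,    1,    1   ])  # 1 = fraud

max_false_positive_rate = 0.01  # e.g., flag at most 1% of good customers

# Scan candidate thresholds from strictest to loosest and keep the lowest one
# whose false-positive rate on legitimate transactions stays under the target.
threshold = 1.0
for t in sorted(scores, reverse=True):
    flagged_legit = np.mean(scores[labels == 0] >= t)
    if flagged_legit <= max_false_positive_rate:
        threshold = t
    else:
        break

print(f"operating threshold: {threshold:.2f}")
print("flagged:", scores >= threshold)
```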

“It is essentially an adversarial game; criminals are out to make money, and the [business] community needs to curtail that activity,” Hawk Chief Solutions Officer Michael Shearer told PYMNTS in February. “What’s different now is that both sides are armed with some really impressive technology.”

The PYMNTS Intelligence report “Leveraging AI and ML to Thwart Scammers,” a collaboration with Hawk, examined the role of artificial intelligence and machine learning in keeping fraudsters from getting the upper hand.

B2B cyber audits can help organizations assess their security posture, identify vulnerabilities and build trust with partners and clients. For C-suite leaders, these audits are not just about compliance but about safeguarding their enterprise’s long-term stability, resilience and trust.

But LTMs aren’t just about fraud prevention; they’re reshaping broader operations within banks and financial institutions. By automating vast parts of compliance and risk assessment processes, LTMs can help streamline operations. Beyond fraud and compliance, LTMs could potentially come to play critical roles in credit scoring, customer service automation and strategic decision-making.

For all PYMNTS AI coverage, subscribe to the daily AI Newsletter.
