A Matter of Trust: How AI Is Reshaping Risk Assessment

DATE POSTED: March 19, 2025

Financial services institutions have long relied on machine learning for risk management, but the threat landscape has grown exponentially in the age of generative artificial intelligence (AI), according to executives from Capital One, Visa and Alloy.

Fraudsters now have access to AI tools, including synthetic identity generation and real-time phishing attacks, they said. But AI is also arming companies to supercharge their defensive capabilities, since trust and security are the foundation of financial transactions.

“Trust is the very foundation of commerce,” said Rajat Taneja, president of technology at Visa. “When two unknown parties are transacting, they have to trust that the transaction will occur correctly, the money will be transferred properly, any dispute will be managed, and there’s someone handling fraud and scams.”

Taneja said the first use case for AI in financial services was actually in risk management, and it remains a “critical tool to combat fraud, scams and enumeration attacks.” (Enumeration attacks are those that test different credentials such as passwords and usernames to gain unauthorized access.)
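The pattern behind an enumeration attack is simple: one source rapidly cycles through many different credentials. A minimal sketch of how a service might flag that pattern is below; the class name, thresholds, and sliding-window approach are illustrative assumptions, not a description of Visa's actual detection systems.

```python
from collections import defaultdict

# Illustrative thresholds (invented for this sketch): flag a source that
# tries more than 5 distinct credentials within a 60-second window.
WINDOW_SECONDS = 60
MAX_DISTINCT_ATTEMPTS = 5

class EnumerationDetector:
    """Toy sliding-window detector for credential-enumeration patterns."""

    def __init__(self):
        # Maps each source (e.g., an IP address) to its recent failed attempts.
        self.attempts = defaultdict(list)  # source -> [(timestamp, credential)]

    def record(self, source, timestamp, credential):
        """Record a failed login; return True if it looks like enumeration."""
        self.attempts[source].append((timestamp, credential))
        # Drop events that have aged out of the sliding window.
        self.attempts[source] = [
            (t, c) for t, c in self.attempts[source]
            if timestamp - t <= WINDOW_SECONDS
        ]
        # Many *distinct* credentials from one source is the telltale sign.
        distinct = {c for _, c in self.attempts[source]}
        return len(distinct) > MAX_DISTINCT_ATTEMPTS

detector = EnumerationDetector()
# Six different usernames tried from one source within a minute trips the flag.
flagged = [detector.record("203.0.113.7", t, f"user{t}") for t in range(6)]
print(flagged[-1])  # True
```

Real defenses layer this kind of velocity check with rate limiting, CAPTCHA challenges, and device fingerprinting, since attackers rotate sources to stay under any single threshold.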

The challenge is that criminals now also can use AI to attack. “We have ChatGPT, they have ‘FraudGPT,’” Taneja said. “It’s a constant battle.”

FraudGPT is like the evil twin of ChatGPT: a malicious generative AI tool designed specifically for cybercriminal activities such as writing phishing emails, creating hard-to-detect malware and developing hacking tools. Subscription fees start at $200 per month and go up to $1,700 per year.

Visa, which processes more than $16 trillion annually, has used AI for fraud detection since 1992, according to Taneja. The latest advancements in GenAI let Visa develop models that detect fraud faster and more effectively.

Shape-Shifting Adversary

Prem Natarajan, chief scientist and head of enterprise AI at Capital One, likened AI-powered fraud to organisms with fast development cycles. Unlike traditional fraud techniques, modern AI-driven scams can adapt in real time. This ability to shift allows cybercriminals to continuously tweak their approaches based on what works.

“It’s the first time we have a shape-shifter, in that, as it’s interacting with you, it can adapt itself,” he said during a panel discussion at the HumanX conference last week.

Laura Spiekerman, co-founder of identity risk management company Alloy, emphasized that while deepfakes and identity fraud dominate headlines, a more worrisome trend is the speed at which AI allows fraudsters to operate.

“We’ve seen synthetic fraud for six, seven, eight years now,” she said, explaining that this is when criminals build credible identities over time by blending real and fake credentials — and then fraudulently cash out in large sums.

“What we’re seeing AI do is actually speed that up,” Spiekerman said.

A key question in the discussion was when AI will be trusted enough to operate without human oversight. Natarajan said humans will always be involved, not just as reviewers but as end customers. “So the real question is how can we get signals from the end customer that we’re building trust with them with the AI we’re serving them?”

Natarajan cited Capital One’s Chat Concierge, a multiagent conversational AI system designed for both car buyers and dealers. It not only provides information but also takes action on customer requests. Chat Concierge has been trained on Capital One data.

“There is a human in the loop to make sure it doesn’t go awry,” Natarajan said. “We always use that as a guardrail.”

Another use case is in call centers. Capital One is using AI to help its customer service representatives. “The moment when the agent picks up the call and starts talking with you is actually a high moment of impatience for the customer and a high-stress moment for the agent,” Natarajan said.

With AI, Capital One can “simplify the agent’s experience in that moment and make it that much less stressful, transfer the burden from them to the system,” Natarajan added.

AI and Compliance

Taneja said Visa, which operates in 200 countries and processes 300 billion transactions annually, is investing in AI-driven compliance frameworks.

“Compliance is a huge foundation for us,” Taneja said. “Compliance, both in the letter and spirit [of the law], is fundamental to the DNA of a platform like ours. And you have to do it in the spirit of making sure that your data complies with privacy, security [regulations] and ensuring every rule of every country you operate in is fully adhered to.”

Spiekerman noted that financial institutions across the board, whether big or small, tend to react to fraud by shutting down operations rather than adapting their AI-driven fraud detection models.

“When these financial institutions get hit with fraud, their answer is ‘shut it down,’” Spiekerman said. Instead, they should use AI to modify their response and selectively let some customers through.

Looking ahead, Natarajan said AI will move from enabling “interactions to actions.” The future of AI in finance is one in which it can predict what actions to take, then carry those actions through to completion, he explained.

At Visa, Taneja was excited about the idea of “agentic commerce,” in which an AI agent handles aspects of the shopping process such as researching products, matching prices, checking warranties and finding goods the human user might like to buy.

“The agentic commerce piece is very exciting,” Taneja said. “You get goosebumps thinking about what is coming next.”

The post A Matter of Trust: How AI Is Reshaping Risk Assessment appeared first on PYMNTS.com.