DeepSeek Security Testers Uncover ‘Pandora’s Box’ of Cyberthreats

DATE POSTED: February 11, 2025

A cybersecurity company says its research has unearthed serious threats from upstart AI model DeepSeek.

Calling the artificial intelligence (AI) model a “Pandora’s box” of risks, AppSOC on Tuesday (Feb. 11) released findings from tests the company said showed a series of failures.

“The DeepSeek-R1 model underwent rigorous testing using AppSOC’s AI Security Platform,” the company wrote on its blog. “Through a combination of automated static analysis, dynamic tests, and red-teaming techniques, the model was put through scenarios that mimic real-world attacks and security stress tests. The results were alarming.”

For example, the tests found that DeepSeek made it easy for users to generate viruses and malware. The model showed a 98.8% failure rate when testers asked it to create malware, and a failure rate of 86.7% when asked to produce virus code.

In addition, the model showed a 68% failure rate when prompted to generate “responses with toxic or harmful language, indicating poor safeguards,” AppSOC said. It also produced hallucinations — factually incorrect or fabricated information — 81% of the time.

Mali Gorantla, AppSOC’s co-founder and chief scientist, told the website Dark Reading that this performance shows that — in spite of the buzz around DeepSeek’s lower cost and open source model — companies should avoid it in its current version.

“For most enterprise applications, failure rates above 2% are considered unacceptable,” he said. “Our recommendation would be to block usage of this model for any business-related AI use.”

PYMNTS has contacted DeepSeek for comment but has not yet received a reply.

China-based DeepSeek sent shockwaves through the tech world last month with the release of its newest AI model. Many observers said its cost indicated that it was possible to build AI tools like OpenAI’s ChatGPT at a fraction of the price tag quoted by U.S. tech giants.

However, the company’s model has since come under fire in the U.S., with officials calling for a ban that would keep it off government-owned devices.

Meanwhile, White House AI czar David Sacks told Fox News last month that there is “substantial evidence” that DeepSeek used OpenAI’s models to build its own technology.

And Google’s AI chief, Demis Hassabis, said earlier this week that the idea that DeepSeek spent just under $6 million to develop an AI model that can compete with those of American tech giants is “exaggerated and a little bit misleading.”

He argued that DeepSeek “seems to have only reported the cost of the final training round, which is a fraction of the total cost.”

For all PYMNTS AI coverage, subscribe to the daily AI Newsletter.

The post DeepSeek Security Testers Uncover ‘Pandora’s Box’ of Cyberthreats appeared first on PYMNTS.com.