AI Chatbots Could Become Cybercriminals’ Latest Weapon

DATE POSTED: September 26, 2024

Artificial intelligence (AI) chatbots, often heralded for their productivity benefits, now threaten cybersecurity as criminals harness them to create sophisticated malware.

HP Wolf Security researchers have uncovered one of the first known instances where attackers used generative AI to write malicious code for distributing a remote access Trojan. This trend marks a shift in cybersecurity, democratizing the ability to create complex malware and potentially leading to a surge in cybercrime.

“If your company is like many others, hackers have infiltrated a tool your software development teams are using to write code. Not a comfortable place to be,” Lou Steinberg, founder and managing partner at CTM Insights and former CTO of TD Ameritrade, told PYMNTS. 

Double-Edged Sword

Developers often rely on AI chatbots like ChatGPT for code generation and translation between programming languages. 

“These chatbots have become full-fledged members of your development teams. The productivity gains they offer are, quite simply, impressive,” Steinberg said.

However, this reliance on AI comes with risks. These AI tools learn from vast amounts of open-source software, which may contain design errors, bugs, or even deliberately inserted malware.

“Letting open-source train your AI tools is like letting a bank-robbing getaway driver teach high school driver’s ed. It has a built-in bias to teach something bad,” Steinberg cautioned. With over a billion open-source contributions made annually, the risk of malicious code seeping into AI training data is substantial.

Morey Haber, chief security adviser at BeyondTrust, explained how criminals exploit AI-powered chatbots to automate malware creation. 

“They are generating components for attacks with minimal technical expertise,” Haber told PYMNTS. “For example, they can ask the chatbot to create scripts, like a PowerShell script that disables email boxes, without knowing the underlying code.”

This capability allows even novice attackers, often referred to as “script kiddies,” to craft sophisticated phishing emails, malware payloads, or ransomware. “These chatbots make it easier for attackers to innovate their techniques,” Haber said.

Staying Safe

To counter these threats, security professionals must update their strategies. Steinberg said companies should “carefully inspect and scan code written by generative AI.”

Traditional malware detection, which relies on matching known signatures, may no longer suffice because AI-generated code changes with each iteration, so no two payloads look alike. “Use static behavioral scans and software composition analysis to detect design flaws or malicious behavior in generated software,” he said.
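
To make that concrete, here is a minimal, hypothetical sketch of the kind of static behavioral check Steinberg describes: it walks the syntax tree of generated Python code and flags call sites that commonly warrant human review. The deny-list is illustrative only; real software composition analysis tools go much further.

```python
import ast

# Illustrative (not exhaustive) deny-list of calls that should
# trigger human review when they appear in generated code.
RISKY_CALLS = {"eval", "exec", "compile", "__import__"}
RISKY_MODULES = {"subprocess", "socket", "ctypes"}

def flag_risky_calls(source: str) -> list[str]:
    """Return findings for risky call sites in Python source code."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if not isinstance(node, ast.Call):
            continue
        func = node.func
        # Bare calls such as eval(...) or exec(...)
        if isinstance(func, ast.Name) and func.id in RISKY_CALLS:
            findings.append(f"line {node.lineno}: {func.id}()")
        # Module-attribute calls such as subprocess.run(...)
        elif (
            isinstance(func, ast.Attribute)
            and isinstance(func.value, ast.Name)
            and func.value.id in RISKY_MODULES
        ):
            findings.append(f"line {node.lineno}: {func.value.id}.{func.attr}()")
    return findings

# Example: scan a generated snippet that shells out to the network.
generated = "import subprocess\nsubprocess.run(['curl', 'http://example.com'])"
print(flag_risky_calls(generated))  # ["line 2: subprocess.run()"]
```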

Haber said organizations should focus on training users to recognize AI-enhanced attacks such as AI-generated phishing emails and deepfakes. “Using anomaly detection, predictive analytics, and continuous monitoring tools can help identify and block AI-driven threats before they cause damage.”
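
As a toy illustration of the anomaly detection Haber mentions, the sketch below flags a metric, say the hourly outbound email count for one account, when it strays several standard deviations from its recent baseline. The data and threshold here are hypothetical.

```python
import statistics

def is_anomalous(history: list[float], latest: float, threshold: float = 3.0) -> bool:
    """Flag `latest` when it sits more than `threshold` standard
    deviations from the recent baseline (a basic z-score test)."""
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return latest != mean
    return abs(latest - mean) / stdev > threshold

# Hypothetical baseline: hourly outbound email counts for one account.
baseline = [12, 9, 14, 11, 10, 13, 12, 8]
print(is_anomalous(baseline, 11))   # False -- within normal variation
print(is_anomalous(baseline, 240))  # True -- likely automated abuse
```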

AI developers and cybersecurity experts are joining forces to mitigate the risks posed by AI-generated malware. “They are building frameworks and detection systems like MITRE and NIST guidelines,” Haber said.

Companies are also urged to establish internal AI policies and guidelines. “Ensure that sensitive data is not inadvertently disclosed or exploited through AI,” Haber said, cautioning against relying on AI for both code generation and testing: “It’s like asking a fox to check the henhouse for foxes.”

Steinberg stressed the importance of vigilance when using AI in software development. “If you are going to trust generated code, the old adage to ‘trust, but verify’ applies.”

Unsophisticated attackers are successfully using AI to target vulnerabilities, Yashin Manraj, the CEO of Pvotal Technologies, told PYMNTS. 

“Launching a dozen new AI chatbots that leverage the vast wealth of indexed ransomware, malware and other code snippets to exploit older infrastructure and unpatched vulnerabilities has helped increase the number of successful cyberattacks,” Manraj said. 

Regarding AI chatbots’ capabilities in malware development, Manraj said, “We have yet to see AI chatbots outpacing cutting-edge security professionals.” However, he warned that “the volume of attacks on outdated systems has increased.”

Manraj said, “Developers are using more cryptographic tools, secure application signing, and AI-detection methods to help users differentiate between legitimate and malicious applications.”
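
One way to read “secure application signing” in practice is sketched below using the Python cryptography package: an artifact is trusted only if its Ed25519 signature verifies against the publisher’s public key. The inline key pair is for demonstration only; a real publisher would keep the private key offline and distribute the public key out of band.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Demo key pair generated inline; in production the private key stays
# offline and the public key is pinned by the client.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

artifact = b"application release bytes"
signature = private_key.sign(artifact)

def verify_artifact(data: bytes, sig: bytes) -> bool:
    """Trust the artifact only if the publisher's signature verifies."""
    try:
        public_key.verify(sig, data)
        return True
    except InvalidSignature:
        return False

print(verify_artifact(artifact, signature))                # True
print(verify_artifact(artifact + b"tampered", signature))  # False
```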

He added that there are efforts to create “more sandboxed environments, reduce the access applications receive by default, and eliminate the ability of single deprecated software to impact an entire [system].”
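
A minimal sketch of that least-privilege idea, assuming a POSIX system: run code as a child process with a stripped environment, Python’s isolated mode, and a hard timeout. This is a default-deny posture, not a substitute for an OS-level sandbox.

```python
import subprocess
import sys

def run_restricted(code: str) -> str:
    """Run a Python snippet with reduced default access."""
    result = subprocess.run(
        [sys.executable, "-I", "-c", code],  # -I ignores env vars and user site-packages
        env={},              # no inherited tokens, keys, or search paths
        capture_output=True,
        timeout=5,           # kill runaway or looping code
    )
    return result.stdout.decode()

print(run_restricted("print('ran with minimal privileges')"))
```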

