
Microsoft reports that hackers are integrating AI into every phase of operations, from reconnaissance and phishing to malware development and post-compromise activity. The company described AI as a “force multiplier” that reduces technical barriers and accelerates execution for attackers of varying skill levels.
The findings signal a fundamental shift in the cyber threat landscape: AI is no longer a theoretical risk but a practical tool, actively exploited by both sophisticated and low-skilled threat actors to operate at a scale previously out of reach.
North Korean threat groups Jasper Sleet and Coral Sleet are using generative AI to power fake employment schemes targeting Western companies. Jasper Sleet actors use AI tools to generate culturally appropriate name lists, tailor fake resumes to specific job postings, and craft professional communications to sustain long-term employment once hired, according to CyberScoop.
Microsoft observed Jasper Sleet using the AI application Faceswap to insert North Korean IT workers’ faces into stolen identity documents and create polished headshots for resumes. The group also deploys voice-altering technology during virtual interviews to disguise accents, enabling operatives to pose as Western candidates, The Guardian reported.
“Jasper Sleet employs AI throughout the entire attack process to secure employment, maintain employment, and exploit access on a large scale,” Microsoft stated.
Coral Sleet uses AI coding tools to generate and refine malware components, create fake company websites, provision remote infrastructure, and rapidly test payloads. The group has jailbroken large language models to generate malicious code that bypasses built-in safety controls, according to Microsoft’s blog post.
Microsoft noted early experimentation by threat actors with agentic AI, where models support iterative decision-making and task execution, though this has not yet been observed at scale.
Google reported in February that its Threat Intelligence Group observed threat actors using AI to gather information, create phishing campaigns, and develop malware. Amazon documented a campaign in which a Russian-speaking hacker used generative AI services to breach more than 600 FortiGate firewalls across 55 countries in five weeks, a scale of operation that would previously have required a larger, more capable team than a single attacker with limited skills.
Microsoft advised organizations to treat AI-powered IT worker schemes as insider risks and to focus on detecting abnormal credential use, hardening identity systems against phishing, and securing AI systems that may themselves become targets.
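One common signal of abnormal credential use is "impossible travel": two logins by the same account from locations too far apart to reach in the elapsed time. The sketch below is a minimal, hypothetical illustration of that idea, not Microsoft's detection method; the event schema, speed threshold, and field names are all assumptions made for the example.

```python
from dataclasses import dataclass
from datetime import datetime
from math import radians, sin, cos, asin, sqrt

@dataclass(frozen=True)
class Login:
    user: str
    when: datetime
    lat: float
    lon: float

def haversine_km(a: Login, b: Login) -> float:
    # Great-circle distance in kilometers between two login locations.
    dlat = radians(b.lat - a.lat)
    dlon = radians(b.lon - a.lon)
    h = sin(dlat / 2) ** 2 + cos(radians(a.lat)) * cos(radians(b.lat)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def impossible_travel(events: list[Login], max_kmh: float = 900.0) -> list[tuple[Login, Login]]:
    """Flag consecutive logins by the same user whose implied travel speed
    exceeds max_kmh (roughly airliner cruising speed; threshold is illustrative)."""
    flagged: list[tuple[Login, Login]] = []
    last_seen: dict[str, Login] = {}
    for event in sorted(events, key=lambda e: e.when):
        prev = last_seen.get(event.user)
        if prev is not None:
            hours = (event.when - prev.when).total_seconds() / 3600
            if hours > 0 and haversine_km(prev, event) / hours > max_kmh:
                flagged.append((prev, event))
        last_seen[event.user] = event
    return flagged
```

In practice this heuristic would feed a broader identity-protection pipeline alongside phishing-resistant authentication, as the guidance above suggests, rather than stand alone.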