Software developers have embraced “artificial intelligence” language models for code generation in a big way, with huge gains in productivity but also some predictably dubious developments. It’s no surprise that hackers and malware writers are doing the same.
According to recent reports, there have been several active malware attacks spotted with code that’s at least partially generated by AI.
BleepingComputer chronicles multiple attacks using suspected AI-written code, with reports from Proofpoint and HP making the case that these attacks were assembled without the technical expertise normally required for large-scale malware campaigns. You could call it the democratization of hacking.
The attacks used fairly straightforward vectors (HTML, VBScript, and JavaScript), with code that was broader and less targeted. As a result, these attacks work best when delivered as a download hidden within a ZIP file or through some other conventional attack method.
It’s the kind of thing power users are already wary of, or at least should be, since these kinds of attacks have been around for decades, long before AI-generated code emerged. Complex and specifically targeted attacks, like the recent PKfail disaster, are probably beyond the reach of broad code generation like this, at least for now.
But there’s still cause for concern. These tools could dramatically increase the volume of simpler attacks on web users, demanding extra diligence (especially on Windows) and making virus and malware protection even more crucial.
I’m more worried about the combination of skilled malware developers and AI generation tools. Even if you can’t train an AI to write brilliant code, a talented developer could use AI to automate their processes and become far more efficient. As always, keep your antivirus software up to date and don’t download from unknown sources.
Further reading: The best antivirus software on Windows