Technical proficiency is no longer a prerequisite for launching malicious attacks either, with generative AI making it easier for even novice adversaries to generate malicious code or automate ransom negotiations.
Generative AI has been dominating the headlines recently, surpassing boundaries and expectations with its ability to summarise content, generate code, and simulate human conversations, among many other use cases. But while generative AI presents opportunities for productivity gains and innovation, especially for Southeast Asian tech companies exploring AI integration, we should not discount its potential to improve attacker productivity and lower the barriers to entry for adversaries. So how is generative AI being used for adversarial purposes?
Let’s start with social engineering-based attacks. In the past, we’ve looked at grammar and spelling errors as possible indicators of spear-phishing emails. Today, the signs aren’t as clear-cut, as generative AI elevates social engineering threats to a new level. Adversaries can now replicate the nuances of language and communication with alarming precision to improve their phishing attempts.
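To see why this matters, consider a toy sketch of the kind of spelling-based heuristic that older phishing filters relied on. Everything here is illustrative: the word list and function name are invented for this example, and real filters use far richer signals. The point is that a crude, typo-ridden lure scores high, while an AI-polished one sails straight through.

```python
# Illustrative only: a naive "misspelling ratio" heuristic of the sort
# that once helped flag phishing emails. The known-word list below is a
# hypothetical stand-in for a real dictionary.
KNOWN_WORDS = {
    "dear", "customer", "your", "account", "has", "been",
    "suspended", "please", "verify", "details", "now",
}

def misspelling_ratio(text: str) -> float:
    """Fraction of words not found in the dictionary (0.0 = all known)."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    unknown = [w for w in words if w and w not in KNOWN_WORDS]
    return len(unknown) / max(len(words), 1)

# A crude, typo-ridden lure versus an AI-polished rewrite of the same lure.
crude = "Dear custmer, your acount has been suspnded, plese verify now"
polished = "Dear customer, your account has been suspended, please verify your details now"

print(misspelling_ratio(crude))     # high ratio: the heuristic flags it
print(misspelling_ratio(polished))  # 0.0: the polished version evades it
```

Once a language model can produce fluent, error-free copy on demand, this entire class of signal collapses, which is exactly the shift described above.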

