Microsoft and OpenAI Issue Warning: Nation-States Harnessing AI as Powerful Arsenal in Cyber Warfare
Nation-state actors associated with Russia, North Korea, Iran, and China are experimenting with artificial intelligence (AI) and large language models (LLMs) to complement their ongoing cyber-attack operations.
A joint report from Microsoft and OpenAI reveals that efforts by five state-affiliated actors to employ AI services for malicious cyber activities were disrupted, with the two companies terminating the actors' assets and accounts.
“Language support is a natural feature of LLMs and is attractive for threat actors with continuous focus on social engineering and other techniques relying on false, deceptive communications tailored to their targets’ jobs, professional networks, and other relationships,” Microsoft said in the report.
For instance, the Russian nation-state group tracked as Forest Blizzard (aka APT28) is said to have used LLMs to conduct open-source research into satellite communication protocols and radar imaging technology, as well as for support with scripting tasks.
Some of the other notable hacking crews are listed below –
- Emerald Sleet (aka Kimsuky), a North Korean threat actor, has used LLMs to identify experts, think tanks, and organizations focused on defense issues in the Asia-Pacific region, understand publicly available flaws, help with basic scripting tasks, and draft content that could be used in phishing campaigns.
- Crimson Sandstorm (aka Imperial Kitten), an Iranian threat actor which has used LLMs to create code snippets related to app and web development, generate phishing emails, and research common ways malware could evade detection.
- Charcoal Typhoon (aka Aquatic Panda), a Chinese threat actor which has used LLMs to research various companies and vulnerabilities, generate scripts, create content likely for use in phishing campaigns, and identify techniques for post-compromise behavior.
- Salmon Typhoon (aka Maverick Panda), a Chinese threat actor which has used LLMs to translate technical papers, retrieve publicly available information on multiple intelligence agencies and regional threat actors, resolve coding errors, and find concealment tactics to evade detection.
Microsoft also said it is formulating a set of principles to mitigate the risks posed by the malicious use of AI tools and APIs by nation-state advanced persistent threats (APTs), advanced persistent manipulators (APMs), and cybercriminal syndicates, and to conceive effective guardrails and safety mechanisms around its models.
While the prospect of AI-enhanced nation-state cyber operations may seem daunting, it’s reassuring that the observed LLM abuses have not yet resulted in particularly novel or devastating attack techniques. However, vigilance remains crucial in the ever-evolving landscape of cybersecurity.