North Korea and Iran Utilize AI for Hacking, Microsoft Reveals

Microsoft recently disclosed that North Korea and Iran are employing generative artificial intelligence (AI) for offensive cyber operations.


Adversaries Exploit Generative AI

According to Microsoft, the US adversaries involved are chiefly Iran and North Korea, and to a lesser extent Russia and China. These adversaries are now using generative AI to organize and mount offensive cyber operations. Working with its business partner OpenAI, Microsoft has detected and disrupted several threats that used or attempted to use the AI technology the two companies developed.

In a blog post, Microsoft described the techniques as early-stage and neither particularly novel nor unique, but said it was important to expose them publicly because US rivals are using large language models to expand their ability to breach networks and conduct influence campaigns.

Cybersecurity firms have long used machine learning defensively, chiefly to detect anomalous behavior on networks. But offensive hackers and criminals use AI as well, and the introduction of large language models such as OpenAI's ChatGPT has raised the stakes in that cat-and-mouse game.
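To make the defensive side concrete, below is a minimal sketch of the kind of anomaly detection described above, using an unsupervised isolation forest from scikit-learn. The traffic features, port choices, and thresholds are illustrative assumptions for this example, not details from any vendor's product or from Microsoft's report.

```python
# Minimal sketch: flagging anomalous network connections with an
# unsupervised model. Feature choices here are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "normal" traffic: [bytes_sent, duration_sec, dest_port]
normal = np.column_stack([
    rng.normal(5_000, 1_500, 1_000),   # typical payload sizes
    rng.normal(2.0, 0.5, 1_000),       # short session durations
    rng.choice([80, 443], 1_000),      # common web ports
])

# A few suspicious connections: huge transfers to an unusual port
suspicious = np.array([
    [500_000, 120.0, 4444],
    [750_000, 300.0, 4444],
])

# Train only on traffic assumed normal; anomalies stand out at predict time
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal)

# predict() returns -1 for anomalies, 1 for inliers
print(model.predict(suspicious))  # expected: [-1 -1], both flagged
```

The key design point is that the model never needs labeled attack data: it learns what ordinary traffic looks like and scores deviations, which is why this approach has been a staple of network defense long before large language models arrived.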

Examples of AI Exploitation

Microsoft provided specific instances where AI was exploited by malicious groups:

- The North Korean cyber-espionage group Kimsuky used generative AI models to research foreign think tanks that study the country, and to generate content likely to be used in spear-phishing campaigns.

- Iran's Revolutionary Guard used large language models for social engineering, for troubleshooting software errors, and for studying how intruders might evade detection on a compromised network. That included generating phishing emails, among them one impersonating an international development agency and another attempting to lure prominent feminists to an attacker-built website on feminism.

- The Russian GRU military intelligence unit known as Fancy Bear used generative AI models to research satellite and radar technologies related to the war in Ukraine.

- The Chinese cyber-espionage group Aquatic Panda explored how large language models could augment its technical operations.

- Another Chinese group, Maverick Panda, interacted with large language models to evaluate their effectiveness as a source of information on sensitive topics, high-profile individuals, regional geopolitics, US influence, and internal affairs.

AI's Potential Threat and Future

OpenAI noted that its current GPT-4 model offers only limited capabilities for malicious cybersecurity tasks beyond what is already achievable with publicly available, non-AI tools. Cybersecurity researchers expect that to change.

Some critics argue that the public release of ChatGPT and similar models was rushed and irresponsible, with security largely an afterthought in their development. Many security professionals nonetheless expect AI and large language models to evolve into potent offensive weapons for nation-state militaries.

Opinions within the cybersecurity community are mixed on Microsoft's approach. Some professionals argue the company should focus on making large language models more secure rather than selling defensive tools to address vulnerabilities those models help create.