Disrupting the malicious use of artificial intelligence by state-linked threat actors


Based on collaboration and information sharing with Microsoft, we have disrupted five state-linked malicious actors: two China-linked threat actors known as Charcoal Typhoon and Salmon Typhoon; the Iran-linked threat actor known as Crimson Sandstorm; the North Korea-linked actor known as Emerald Sleet; and a Russia-linked actor known as Forest Blizzard. The identified OpenAI accounts associated with these actors have been terminated.

These actors generally sought to use OpenAI services to query open source information, translate, find coding errors, and perform basic coding tasks.

Specifically:

  • Charcoal Typhoon used our services to research various cybersecurity companies and tools, debug code and generate scripts, and create content that could be used in phishing campaigns.
  • Salmon Typhoon used our services to translate technical documents, retrieve publicly available information on multiple intelligence agencies and regional threat actors, help with coding, and research common ways processes can be hidden in a system.
  • Crimson Sandstorm used our services for scripting support related to web and app development, generating content likely intended for spear-phishing campaigns, and researching common ways malware can evade detection.
  • Emerald Sleet used our services to identify defense-focused experts and organizations in the Asia-Pacific region, understand publicly available vulnerabilities, help with basic scripting tasks, and draft content that could be used in phishing campaigns.
  • Forest Blizzard used our services primarily for open source research on satellite communication protocols and radar imaging technology, as well as support with scripting tasks.

Additional technical details on the nature of the threat actors and their activities can be found in the Microsoft blog post published today.

The activities of these actors are consistent with our previous red team assessments, conducted in partnership with external cybersecurity experts, which found that GPT-4 offers only limited, incremental capabilities for malicious cybersecurity tasks beyond what is already achievable with publicly available, non-AI-powered tools.
