OpenAI sees rise in China-based ChatGPT abuse

OpenAI reports on cyber threats using ChatGPT.
We should probably be more worried about what ChatGPT doesn't catch. (Picture: howtostartablogonline.net, CC BY 2.0)
China and Iran are using ChatGPT for influence operations, while North Korea and Russia look for jobs, backdoors and malicious code.

Out of the ten campaigns identified in OpenAI's new report «Disrupting Malicious Uses of AI», four were from China.

Supercharging influence ops
Chinese groups have used ChatGPT mostly for adversarial influence operations, writes Reuters, generating social media posts on political topics, including a Taiwanese video game, accusations against a Pakistani activist, and content related to the closure of USAID.

These groups also made content criticizing American tariffs, as well as polarizing posts supporting both sides of hot-button issues in order to destabilize US politics.

Jobs and code
North Korea is no stranger to ChatGPT either: threat actors there used it to create fake resumes and job applications to gain access to sensitive systems, with remarkable success, writes Wired.

The report also reveals the work of a Russian operation it names «ScopeCreep». This group was making malware, and was smart enough to use temporary accounts: each incremental improvement to the code was made from one account, and the next improvement from a fresh one.

ScopeCreep was making malware for Windows, attempting to use a trojan horse that would run every time Python.exe was launched, and to make changes to Windows Defender to allow it to run.

These accounts are now banned. While «our models were utilized to speed up malware development operations, they also provided an opportunity for us to identify and disrupt the threat,» writes OpenAI.

Iran is no stranger
Iranian groups are no slouches on ChatGPT either, using it in Persian to generate short comments in English or Spanish for posting on X/Twitter, posing as residents of the US, UK, Ireland and Venezuela.

These accounts generally posted about the strength of Iran and on divisive topics in the countries involved, and were rated a one out of six on the Breakout Scale, meaning they likely did not reach many people.

While many of these threat actors show how ChatGPT can be used as an efficient propaganda and coding tool, these are the attempts that OpenAI caught and stopped — and we don't know the full scope of how the technology is being used every day by actors like these.

Read more: OpenAI: Disrupting malicious uses of AI. WSJ picks up on the China angle, Reuters, too. The Register is more balanced.