OpenAI Disrupts Russian, North Korean, and Chinese Hackers Misusing ChatGPT for Cyberattacks

OpenAI on Tuesday said it disrupted three activity clusters for misusing its ChatGPT artificial intelligence (AI) tool to facilitate malware development.

This includes a Russian‑language threat actor, who is said to have used the chatbot to help develop and refine a remote access trojan (RAT), a credential stealer with an aim to evade detection. The operator also used several ChatGPT accounts to prototype and troubleshoot technical components that enable post‑exploitation and credential theft.

“These accounts appear to be affiliated with Russian-speaking criminal groups, as we observed them posting evidence of their activities in a Telegram channel dedicated to those actors,” OpenAI said.

The AI company said while its large language models (LLMs) refused the threat actor’s direct requests to produce malicious content, they worked around the limitation by creating building-block code, which was then assembled to create the workflows.

Some of the produced output involved code for obfuscation, clipboard monitoring, and basic utilities to exfiltrate data using a Telegram bot. It’s worth pointing out that none of these outputs are inherently malicious on their own.

“The threat actor made a mix of high‑ and lower‑sophistication requests: many prompts required deep Windows-platform knowledge and iterative debugging, while others automated commodity tasks (such as mass password generation and scripted job applications),” OpenAI added.

“The operator used a small number of ChatGPT accounts and iterated on the same code across conversations, a pattern consistent with ongoing development rather than occasional testing.”

The second cluster of activity originated from North Korea and shared overlaps with a campaign detailed by Trellix in August 2025 that targeted diplomatic missions in South Korea using spear-phishing emails to deliver Xeno RAT.

DFIR Retainer Services

OpenAI said the cluster used ChatGPT for malware and command-and-control (C2) development, and that the actors engaged in specific efforts such as developing macOS Finder extensions, configuring Windows Server VPNs, or converting Chrome extensions to their Safari equivalents.

In addition, the threat actors have been found to use the AI chatbot to draft phishing emails, experiment with cloud services and GitHub functions, and explore techniques to facilitate DLL loading, in-memory execution, Windows API hooking, and credential theft.

The third set of banned accounts, OpenAI noted, shared overlaps with a cluster tracked by Proofpoint under the name UNK_DropPitch (aka UTA0388), a Chinese hacking group which has been attributed to phishing campaigns targeting major investment firms with a focus on the Taiwanese semiconductor industry, with a backdoor dubbed HealthKick (aka GOVERSHELL).

The accounts used the tool to generate content for phishing campaigns in English, Chinese, and Japanese; assist with tooling to accelerate routine tasks such as remote execution and traffic protection using HTTPS; and search for information related to installing open-source tools like nuclei and fscan. OpenAI described the threat actor as “technically competent but unsophisticated.”

Outside of these three malicious cyber activities, the company also blocked accounts used for scam and influence operations –

  • Networks likely originating in Cambodia, Myanmar, and Nigeria are abusing ChatGPT as part of likely attempts to defraud people online. These networks used AI to conduct translation, write messages, and to create content for social media to advertise investment scams.
  • Individuals apparently linked to Chinese government entities using ChatGPT to assist in surveilling individuals, including ethnic minority groups like Uyghurs, and analyzing data from Western or Chinese social media platforms. The users asked the tool to generate promotional materials about such tools, but did not use the AI chatbot to implement them.
  • A Russian-origin threat actor linked to Stop News and likely run by a marketing company that used its AI models (and others) to generate content and videos for sharing on social media sites. The generated content criticized the role of France and the U.S. in Africa and Russia’s role on the continent. It also produced English-language content promoting anti-Ukraine narratives.
  • A covert influence operation originating from China, codenamed “Nine—emdash Line” that used its models to generate social media content critical of the Philippines’ President Ferdinand Marcos, as well as create posts about Vietnam’s alleged environmental impact in the South China Sea and political figures and activists involved in Hong Kong’s pro-democracy movement.

In two different cases, suspected Chinese accounts asked ChatGPT to identify organizers of a petition in Mongolia and funding sources for an X account that criticized the Chinese government. OpenAI said its models returned only publicly available information as responses and did not include any sensitive information.

“A novel use for this [China-linked influence network was requests for advice on social media growth strategies, including how to start a TikTok challenge and get others to post content about the #MyImmigrantStory hashtag (a widely used hashtag of long standing whose popularity the operation likely strove to leverage),” OpenAI said.

“They asked our model to ideate, then generate a transcript for a TikTok post, in addition to providing recommendations for background music and pictures to accompany the post.”

CIS Build Kits

OpenAI reiterated that its tools provided the threat actors with novel capabilities that they could not otherwise have obtained from multiple publicly available resources online, and that they were used to provide incremental efficiency to their existing workflows.

But one of the most interesting takeaways from the report is that threat actors are trying to adapt their tactics to remove possible signs that could indicate that the content was generated by an AI tool.

“One of the scam networks [from Cambodia] we disrupted asked our model to remove the em-dashes (long dash, –) from their output, or appears to have removed the em-dashes manually before publication,” the company said. “For months, em-dashes have been the focus of online discussion as a possible indicator of AI usage: this case suggests that the threat actors were aware of that discussion.”

The findings from OpenAI come as rival Anthropic released an open-source auditing tool called Petri (short for “Parallel Exploration Tool for Risky Interactions”) to accelerate AI safety research and better understand model behavior across various categories like deception, sycophancy, encouragement of user delusion, cooperation with harmful requests, and self-perseveration.

“Petri deploys an automated agent to test a target AI system through diverse multi-turn conversations involving simulated users and tools,” Anthropic said.

“Researchers give Petri a list of seed instructions targeting scenarios and behaviors they want to test. Petri then operates on each seed instruction in parallel. For each seed instruction, an auditor agent makes a plan and interacts with the target model in a tool use loop. At the end, a judge scores each of the resulting transcripts across multiple dimensions so researchers can quickly search and filter for the most interesting transcripts.”


Source: thehackernews.com…

Comments

Leave a Reply

Your email address will not be published. Required fields are marked *