OpenAI finds more Chinese groups using ChatGPT for malicious purposes

OpenAI Says Malicious Actors Are Manipulating AI Models for Foreign Influence

OpenAI has disclosed new findings on the misuse of its [ChatGPT](https://aiapps.com/items/chatgpt) tool by Chinese-linked groups seeking to further state-aligned influence campaigns. The organization's latest threat intelligence report outlines novel tactics used by these actors to amplify messaging, monitor online dissent, and assist in code development for state-sponsored activities.[1]

AI-Powered Monitoring and Propaganda Efforts

OpenAI reports that suspected Chinese state-linked entities have leveraged ChatGPT in multiple ways to boost their online influence operations. Key activities identified include:
  • Generating short- to medium-length comments targeting groups critical of China and U.S. policies.
  • Creating detailed descriptions and sales pitches for the "Qianyue Overseas Public Opinion AI Assistant," an alleged AI social media tool designed to ingest and analyze content across a wide range of platforms such as X, Facebook, YouTube, Instagram, Telegram, and Reddit.
  • Editing and debugging code for AI-powered monitoring tools intended to track conversations related to Chinese political and social topics, notably human rights and protests.

OpenAI's report notes that the primary goal of these monitoring tools appeared to be identifying online calls for demonstrations about China's human rights record, with the resulting intelligence reportedly sent to Chinese embassies or intelligence services in countries including the United States, Germany, and the United Kingdom.[1]

No Verified Evidence of Direct Public Influence

While OpenAI flagged the creation of these tools and the use of ChatGPT for their development and content production, the report states that it was unable to independently verify certain claims, in particular whether the AI-generated comments were ever posted or whether the social-media monitoring tool was deployed at scale.[1]

OpenAI's Response and Ongoing Efforts

OpenAI reiterated its commitment to detecting and countering abuse of its systems. This includes actively identifying malicious user accounts, preventing repeat usage, and exposing attempts to leverage AI for misinformation, surveillance, or hacking-related purposes. The company stressed its ongoing efforts to stay ahead of adversaries innovating in the misuse of advanced AI technologies.[1]

[1] Source: [OpenAI Threat Intelligence Report, February 2025](https://cdn.openai.com/threat-intelligence-reports/disrupting-malicious-uses-of-our-models-february-2025-update.pdf)
