By: Austin Alonzo
Multiple foreign adversaries, including China and North Korea, are utilizing OpenAI’s popular generative artificial intelligence tools to enhance their espionage and influence operations and commit fraud, according to the company.
OpenAI shared a threat report with a group of select media outlets saying it has shut down a number of covert influence and cyber operations tied to foreign actors. According to details of the report published on June 5 by Reuters, OpenAI has disrupted multiple operations and banned accounts tied to what it identified as malicious use of its AI models.
Despite the growing sophistication of these campaigns, representatives of OpenAI told reporters there is little evidence that the use of AI resulted in greater reach or impact.
Generative AI is a type of artificial intelligence that can create human-like content, like text, images, or audio. OpenAI’s ChatGPT and Sora are among the most popular generative AI tools currently available to the public. Other massive companies like Google parent Alphabet, Microsoft, and Amazon offer competing services.
OpenAI, the maker of ChatGPT and other popular generative AI products, did not immediately respond to a request for comment from The Epoch Times.
So far, the report said, OpenAI has halted four operations likely tied to the Chinese communist regime. These activities included influence operations, social engineering attempts, and digital surveillance efforts spanning multiple platforms and languages.
For example, Chinese actors used generative AI to create content for social media platforms such as TikTok, Reddit, X, and Facebook. The content included general criticism of U.S. foreign policy and tariff efforts, as well as a smear campaign aimed at a Taiwanese studio that published a video game satirizing the Chinese Communist Party.
Accounts linked to Chinese actors also used AI to generate biographies for fake social media accounts purporting to belong to journalists, to translate intercepted documents from English to Chinese, and to analyze those documents.
In February, OpenAI released a threat intelligence report detailing its previous investigations of what it called threat actors that appeared to originate from China.
At the time, it detailed one instance of agents likely tied to the Chinese communist regime using OpenAI's technology to develop a surveillance tool for Chinese intelligence services, designed to monitor social media posts and gauge public opinion on various political topics. In another instance, actors likely based in China used ChatGPT accounts to generate social media content in English and long-form articles in Spanish.
North Korea and Other State Actors
The June report from OpenAI also detailed continuing efforts by North Korean operatives to get information technology jobs at American companies.
In February, OpenAI disclosed that it had banned a number of accounts potentially linked to North Korea that had used OpenAI tools to create fake job applicants, including matching resumes, cover letters, and online job profiles. These phony workers applied for jobs to raise money for the North Korean communist regime and gain access to sensitive data.
Some of the phony applicants used OpenAI tools to help them pass interviews and technical assessments. OpenAI said in February that a number of the fraudulent candidates were even hired, after which North Korean-linked accounts used OpenAI tools to write code, troubleshoot, and message coworkers.
“They also used our models to devise cover stories to explain unusual behaviors such as avoiding video calls, accessing corporate systems from unauthorized countries or working irregular hours,” the February report said.