OpenAI Disrupts Five Covert Influence Campaigns Misusing Its AI

As an analyst with a background in technology and cybersecurity, I find OpenAI's recent disclosures about the use of its AI technology in covert influence operations both alarming and intriguing. They are a stark reminder of how advanced AI models can be weaponized to manipulate public opinion and sway political outcomes.

OpenAI, the artificial intelligence company led by CEO Sam Altman, revealed that it has taken action against several clandestine campaigns that attempted to misuse its technology to manipulate public opinion. On May 30, the organization announced the termination of accounts engaged in covert influence activities.

As a crypto investor, I've been closely following OpenAI's recent announcements, and I was glad to learn that over the past three months the company disrupted no fewer than five clandestine operations attempting to exploit its models for deceptive activity online.

The actors behind these operations used artificial intelligence (AI) to generate comments on articles, create social media personas, and review and correct translated text.

How we’re disrupting attempts by covert influence operations to use AI deceptively:

— OpenAI (@OpenAI) May 30, 2024

One operation, the China-linked network known as “Spamouflage,” used OpenAI's models to research public social media activity and to generate multilingual content for platforms such as X, Medium, and Blogspot, with the aim of shaping public sentiment and influencing political outcomes. The group also used the technology to debug code and to manage databases and websites.
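
To make that mechanism concrete, here is a minimal, hypothetical sketch of what scripted multilingual content generation can look like with the OpenAI Python SDK. Nothing here is recovered from the actual campaign; the model name, language list, and talking point are illustrative placeholders.

```python
# Hypothetical illustration only; not code recovered from any campaign.
# Shows how a text-generation API can be scripted to mass-produce the same
# talking point in several languages, the pattern described in the report.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

TALKING_POINT = "Placeholder talking point to be promoted."  # illustrative
LANGUAGES = ["English", "Chinese", "Japanese", "Korean"]     # illustrative

for language in LANGUAGES:
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[
            {
                "role": "system",
                "content": f"Write a short social media post in {language}.",
            },
            {"role": "user", "content": TALKING_POINT},
        ],
    )
    print(f"[{language}] {response.choices[0].message.content}")
```

The point of the sketch is how little effort such automation takes: a single loop turns one prompt into platform-ready posts in any number of languages.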

Another campaign, a Russian operation dubbed “Bad Grammar,” targeted Ukraine, Moldova, the Baltic States, and the United States. This covert operation used OpenAI models to debug code for running Telegram bots and to generate short political comments for distribution on Telegram.
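
The “Telegram bots” mentioned above are automated accounts that post model-generated replies. Below is a minimal, hypothetical sketch of such a bot using the official Telegram Bot API over HTTP together with the OpenAI SDK; the token, model name, and prompt are placeholders, and none of this is code from the operation itself.

```python
# Hypothetical illustration only; not code from the "Bad Grammar" operation.
# A minimal long-polling Telegram bot that replies to incoming messages with
# model-generated text, using the official Telegram Bot API over HTTP.
import requests
from openai import OpenAI

BOT_TOKEN = "123456:PLACEHOLDER"  # illustrative placeholder token
API_URL = f"https://api.telegram.org/bot{BOT_TOKEN}"
client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

offset = None
while True:
    # Long-poll Telegram for new updates (messages sent to the bot).
    updates = requests.get(
        f"{API_URL}/getUpdates", params={"timeout": 30, "offset": offset}
    ).json()
    for update in updates.get("result", []):
        offset = update["update_id"] + 1  # acknowledge this update
        message = update.get("message")
        if not message or "text" not in message:
            continue
        # Generate a short reply with the model (prompt is illustrative).
        reply = client.chat.completions.create(
            model="gpt-4o",  # illustrative model name
            messages=[
                {
                    "role": "user",
                    "content": f"Write a brief reply to: {message['text']}",
                }
            ],
        ).choices[0].message.content
        requests.post(
            f"{API_URL}/sendMessage",
            json={"chat_id": message["chat"]["id"], "text": reply},
        )
```

Long polling is shown because it needs no public webhook endpoint, which is presumably attractive to operators trying to stay low-profile.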

A third group, known as “Doppelganger,” used AI to produce comments in English, French, German, Italian, and Polish, posting them on platforms such as X and 9GAG in an effort to sway public sentiment.

An Iranian cyber actor known as the “International Union of Virtual Media” (IUVM) used the models to generate and translate long-form articles and headlines advocating for Iran while disparaging both Israel and the United States.

As an analyst, I've also noted that a commercial company based in Israel, named STOIC, has been producing content related to the Gaza conflict, Israel's Histadrut trade union organization, and the Indian elections. OpenAI refers to this operation as “Zero Zeno,” a name inspired by the founder of the Stoic school of philosophy. Engagement with its campaigns, however, has been relatively low.

In its statement, OpenAI noted that the topics these operations addressed spanned a broad spectrum: Russia's invasion of Ukraine, the conflict in Gaza, the Indian elections, politics in Europe and the United States, and criticism of the Chinese government by Chinese dissidents and foreign governments.

Ben Nimmo, a principal investigator at OpenAI, told The New York Times that these case studies come from some of the most prominent and longest-running influence campaigns currently active.

Notably, OpenAI observed that none of these operations appear to have meaningfully increased their audience engagement or reach as a result of using its services.

