OpenAI Disrupts Covert Influence Operations With The Help of OSINT


July 9th, 2024

8 mins 30 secs

Season 3


About this Episode

Key Points Discussed:

• Monitoring and Disruption Efforts: OpenAI collaborates with open-source intelligence practitioners to monitor internet activity and identify misuse of its language models by nation-states and other actors. The goal is to disrupt sophisticated threats through continuous improvements to safety systems and collaboration with industry partners.

• Recent Trends: OpenAI has detected and disrupted operations from actors in Russia, China, Iran, and a commercial company in Israel. These operations, including ones named "Bad Grammar" and "Doppelganger," used AI to generate content but failed to engage authentically with audiences.

• Techniques and Tactics: The actors use AI to produce high volumes of content, mixing AI-generated and traditional formats, and faking engagement by generating replies to their own posts. Despite these efforts, they struggled to reach authentic audiences.

• Defensive Strategies: OpenAI employs defensive design policies, such as friction-imposing features, to thwart malicious use. They also share detailed threat indicators with industry peers to enhance the effectiveness of disruptions.

• Case Studies: Examples include Russian and Chinese networks that targeted various regions but achieved little engagement, and an Iranian network generating anti-US and anti-Israeli content. These operations highlight the ongoing challenge of AI misuse.

• Open Source Intelligence: Dekens discusses his work with Shadow Dragon, including a white paper on using open-source intelligence to identify and monitor troll and bot armies. He explains how AI error messages left visible in published posts can be a key indicator of malicious activity.