Author Topic: OpenAI says it stopped multiple covert influence operations that abused its AI models
HCK
« on: June 03, 2024, 04:05:04 pm »

OpenAI says it stopped multiple covert influence operations that abused its AI models

OpenAI said that it stopped five covert influence operations that used its AI models for deceptive activities across the internet. These operations, which OpenAI shut down between 2023 and 2024, originated from Russia, China, Iran and Israel and attempted to manipulate public opinion and influence political outcomes without revealing their true identities or intentions, the company [url=https://openai.com/index/disrupting-deceptive-uses-of-AI-by-covert-influence-operations/]said[/url] on Thursday. “As of May 2024, these campaigns do not appear to have meaningfully increased their audience engagement or reach as a result of our services,” OpenAI said in a [url=https://downloads.ctfassets.net/kftzwdyauwt9/5IMxzTmUclSOAcWUXbkVrK/3cfab518e6b10789ab8843bcca18b633/Threat_Intel_Report.pdf]report[/url] on the operations, adding that it worked with people across the tech industry, civil society and governments to cut off these bad actors.
OpenAI’s report comes amid concerns about the impact of generative AI on the many elections slated around the world this year, including in the US. In its findings, OpenAI revealed how networks of people engaged in influence operations have used generative AI to produce text and images at much higher volumes than before, and to fake engagement by generating comments on social media posts.
“Over the last year and a half there have been a lot of questions around what might happen if influence operations use generative AI,” Ben Nimmo, principal investigator on OpenAI’s Intelligence and Investigations team, told members of the media in a press briefing, [url=https://www.bloomberg.com/news/articles/2024-05-30/openai-shuts-down-influence-networks-using-its-tools-in-russia-china?sref=10lNAhZ9]according[/url] to Bloomberg. “With this report, we really want to start filling in some of the blanks.”
OpenAI said that the Russian operation called “Doppelganger” used the company’s models to generate headlines, convert news articles into Facebook posts, and create comments in multiple languages to undermine support for Ukraine. Another Russian group used OpenAI’s models to debug code for a Telegram bot that posted short political comments in English and Russian, targeting Ukraine, Moldova, the US and the Baltic states. The Chinese network “Spamouflage,” known for its influence efforts across Facebook and Instagram, used OpenAI’s models to research social media activity and generate text-based content in multiple languages across various platforms. The Iranian “International Union of Virtual Media” also used AI to generate content in multiple languages.
OpenAI’s disclosure is similar to the ones that other tech companies make from time to time. On Wednesday, for instance, Meta released its [url=https://www.engadget.com/meta-caught-an-israeli-marketing-firm-running-hundreds-of-fake-facebook-accounts-150021954.html]latest report[/url] on coordinated inauthentic behavior, detailing how an Israeli marketing firm had used fake Facebook accounts to run an influence campaign on its platform targeting people in the US and Canada.

This article originally appeared on Engadget at https://www.engadget.com/openai-says-it-stopped-multiple-covert-influence-operations-that-abused-its-ai-models-225115466.html?src=rss

Source: OpenAI says it stopped multiple covert influence operations that abused its AI models