Author Topic: OpenAI hit by two big security issues this week  (Read 113 times)
HCK
« on: July 09, 2024, 04:05:07 pm »

OpenAI hit by two big security issues this week

OpenAI seems to make headlines every day and this time it's for a double dose of security concerns. The first issue centers on the Mac app for ChatGPT, while the second hints at broader concerns about how the company is handling its cybersecurity.
Earlier this week, engineer and Swift developer Pedro José Pereira Vieito [url=https://x.com/pvieito/status/1808051287088353563]dug into[/url] the Mac ChatGPT app and found that it was storing user conversations locally in plain text rather than encrypting them. The app is only available from OpenAI's website, and since it isn't distributed through the App Store, it doesn't have to follow Apple's sandboxing requirements. Vieito's work was then covered by [url=https://www.theverge.com/2024/7/3/24191636/openai-chatgpt-mac-app-conversations-plain-text]The Verge[/url], and after the flaw attracted attention, OpenAI released an update that added encryption to locally stored chats.
<span id="end-legacy-contents"></span><p>For the non-developers out there, sandboxing is a security practice that keeps potential vulnerabilities and failures from spreading from one application to others on a machine. And for non-security experts, storing local files in plain text means potentially sensitive data can be easily viewed by other apps or malware.</p>
The second issue occurred in 2023 with consequences that have had a ripple effect continuing today. Last spring, a hacker was able to obtain information about OpenAI after illicitly accessing the company's internal messaging systems. [url=https://www.nytimes.com/2024/07/04/technology/openai-hack.html]The New York Times[/url] reported that OpenAI technical program manager Leopold Aschenbrenner raised security concerns with the company's board of directors, arguing that the hack implied internal vulnerabilities that foreign adversaries could take advantage of.
Aschenbrenner now says he was fired for disclosing information about OpenAI and for surfacing concerns about the company’s security. A representative from OpenAI told The Times that “while we share his commitment to building safe A.G.I., we disagree with many of the claims he has since made about our work” and added that his exit was not the result of whistleblowing.
App vulnerabilities are something that every tech company has experienced. Breaches by hackers are also depressingly common, as are contentious relationships between whistleblowers and their former employers. However, between how broadly ChatGPT has been adopted into [url=https://www.engadget.com/chatgpt-is-baked-into-apple-intelligence-185026662.html]major players'[/url] services and how chaotic the company's [url=https://www.engadget.com/openais-board-allegedly-learned-about-chatgpt-launch-on-twitter-235643014.html]oversight[/url], [url=https://www.engadget.com/the-nations-oldest-nonprofit-newsroom-is-suing-openai-and-microsoft-174748454.html]practices[/url] and [url=https://www.engadget.com/former-openai-google-and-anthropic-workers-are-asking-ai-companies-for-more-whistleblower-protections-175916744.html]public reputation[/url] have been, these recent issues are beginning to paint a more worrying picture about whether OpenAI can manage its data.

This article originally appeared on Engadget at https://www.engadget.com/openai-hit-by-two-big-security-issues-this-week-214316082.html?src=rss

Source: OpenAI hit by two big security issues this week