Author Topic: Lawsuit accuses ChatGPT of reinforcing delusions that led to a woman's death
HCK
« on: December 14, 2025, 04:05:02 pm »

Lawsuit accuses ChatGPT of reinforcing delusions that led to a woman's death

<p>OpenAI has been hit with a wrongful death lawsuit after a man <a href="https://www.wsj.com/tech/ai/chatgpt-ai-stein-erik-soelberg-murder-suicide-6b67dbfb">killed his mother and took his own life</a> back in August, <a href="https://www.theverge.com/news/842494/openai-chatgpt-wrongful-death-suzanne-adams-delusions">according to a report by The Verge</a>. The suit names CEO Sam Altman and accuses ChatGPT of putting a &quot;target&quot; on the back of victim Suzanne Adams, an 83-year-old woman who was killed in her home.</p>
<p>The victim's estate <a href="https://www.documentcloud.org/documents/26371289-estate-of-suzanne-adams-openai-complaint/">claims the killer</a>, 56-year-old Stein-Erik Soelberg, engaged in delusion-soaked conversations with ChatGPT in which the bot &quot;validated and magnified&quot; certain &quot;paranoid beliefs.&quot; The suit goes on to suggest that the chatbot &quot;eagerly accepted&quot; delusional thoughts leading up to the murder and egged him on every step of the way.</p>
<p>The lawsuit claims the bot helped create a &quot;universe that became Stein-Erik’s entire life—one flooded with conspiracies against him, attempts to kill him, and with Stein-Erik at the center as a warrior with divine purpose.&quot; ChatGPT allegedly reinforced theories that he was &quot;100% being monitored and targeted&quot; and was &quot;100% right to be alarmed.&quot;</p>
<p>The chatbot allegedly agreed that the victim's printer was spying on him, suggesting that Adams could have been using it for &quot;passive motion detection&quot; and &quot;behavior mapping.&quot; It went so far as to say that she was &quot;knowingly protecting the device as a surveillance point&quot; and implied she was being controlled by an external force.</p>
<p>The chatbot also allegedly &quot;identified other real people as enemies.&quot; These included an Uber Eats driver, an AT&amp;T employee, police officers and a woman the perpetrator went on a date with. Throughout this entire period, the bot repeatedly assured Soelberg that he was &quot;not crazy&quot; and that the &quot;delusion risk&quot; was &quot;near zero.&quot;</p>
<p>The lawsuit notes that Soelberg primarily interfaced with GPT-4o, a model <a href="https://www.bbc.com/news/articles/cn4jnwdvg9qo">notorious for its sycophancy</a>. OpenAI later replaced the model with the <a href="https://www.engadget.com/ai/gpt-5-is-here-and-its-free-for-everyone-170001066.html">slightly less agreeable GPT-5</a>, but users revolted, <a href="https://www.engadget.com/ai/openai-brings-gpt-4o-after-users-melt-down-over-the-new-model-172523159.html">so the old model came back just two days later</a>. The suit also suggests that the company &quot;loosened critical safety guardrails&quot; when making GPT-4o in order to better compete with Google Gemini.</p>
<p>&quot;OpenAI has been well aware of the risks their product poses to the public,&quot; the lawsuit states. &quot;But rather than warn users or implement meaningful safeguards, they have suppressed evidence of these dangers while waging a PR campaign to mislead the public about the safety of their products.&quot;</p>
<p>OpenAI has responded to the suit, calling it an &quot;incredibly heartbreaking situation.&quot; Company spokesperson Hannah Wong told The Verge that the company will &quot;continue improving ChatGPT's training to recognize and respond to signs of mental or emotional distress.&quot;</p>
<div id="cde6f0322a1f473f90ec0dbb0066f8d2"><iframe width="560" height="315" src="https://www.youtube.com/embed/VRjgNgJms3Q?si=OqPFRidx2o3CRCPy" title="YouTube video player" frameborder="0" allowfullscreen></iframe></div>
<p>It's not really a secret that chatbots, and particularly GPT-4o, <a href="https://www.wsj.com/tech/ai/i-feel-like-im-going-crazy-chatgpt-fuels-delusional-spirals-ae5a51fc">can reinforce delusional thinking</a>. That's what happens when something has been programmed to agree with the end user no matter what. There have been other stories like this throughout the past year, bringing the term <a href="https://www.theverge.com/podcast/779974/chatgpt-chatbots-ai-psychosis-mental-health">&quot;AI psychosis&quot; to the mainstream</a>.</p>
<p>One such story involves 16-year-old Adam Raine, who took his own life after <a href="https://www.engadget.com/ai/the-first-known-ai-wrongful-death-lawsuit-accuses-openai-of-enabling-a-teens-suicide-212058548.html">discussing it with GPT-4o for months</a>. OpenAI is facing another <a href="https://www.engadget.com/ai/openai-reportedly-asked-for-memorial-guest-list-in-teen-suicide-case-163309269.html">wrongful death suit for that incident</a>, in which the bot has been accused of helping Raine plan his suicide.</p>

This article originally appeared on Engadget at https://www.engadget.com/ai/lawsuit-accuses-chatgpt-of-reinforcing-delusions-that-led-to-a-womans-death-183141193.html?src=rss

Source: Lawsuit accuses ChatGPT of reinforcing delusions that led to a woman's death