OpenAI forms advisory council on wellbeing and AI

OpenAI announced (https://openai.com/index/expert-council-on-well-being-and-ai/) today that it is creating an advisory council centered on its users' mental and emotional wellness. The Expert Council on Well-being and AI comprises eight researchers and experts on the intersection of technology and mental health. Some of the members are experts OpenAI consulted as it developed parental controls (https://www.engadget.com/ai/openai-is-adding-parental-controls-to-chatgpt-144128085.html). Safety and the protection of younger users have become a bigger talking point for all artificial intelligence companies, including OpenAI, after lawsuits alleged their complicity in multiple cases (https://www.engadget.com/ai/the-first-known-ai-wrongful-death-lawsuit-accuses-openai-of-enabling-a-teens-suicide-212058548.html, https://www.engadget.com/ai/another-lawsuit-blames-an-ai-company-of-complicity-in-a-teenagers-suicide-184529475.html) in which teenagers died by suicide after sharing their plans with AI chatbots.

Creating this council sounds like a wise move, but the effectiveness of any advisor hinges on whether their insights are actually heeded. We've seen other tech companies establish and then utterly ignore their advisory councils; Meta (https://www.engadget.com/social-media/meta-safety-advisory-council-says-the-companys-moderation-changes-prioritize-politics-over-safety-140026965.html) is one notable recent example. And OpenAI's announcement acknowledges that the new council has no real power to guide its operations: "We remain responsible for the decisions we make, but we'll continue learning from this council, the Global Physician Network, policymakers, and more, as we build advanced AI systems in ways that support people's well-being." How seriously OpenAI is taking this effort may only become clear once it starts to disagree with the council: that is when we'll see whether the company is genuinely committed to mitigating the serious risks of AI, or whether this is a smoke-and-mirrors attempt to paper over its issues.

This article originally appeared on Engadget at
https://www.engadget.com/openai-forms-advisory-council-on-wellbeing-and-ai-183815365.html?src=rss