HACKINTOSH.ORG | Macintosh discussion forums

Macintosh News => Apple News => Topic started by: HCK on August 16, 2023, 04:05:05 pm



Title: OpenAI is using GPT-4 to build an AI-powered content moderation system
Post by: HCK on August 16, 2023, 04:05:05 pm
OpenAI is using GPT-4 to build an AI-powered content moderation system

Content moderation has been one of the thorniest issues on the internet for decades. It's a difficult subject for anyone to tackle, given the subjectivity involved in deciding what content should be permissible on a given platform. ChatGPT maker OpenAI thinks it can help, and it has been putting GPT-4's content moderation skills to the test. It's using the large multimodal model "to build a content moderation system that is scalable, consistent and customizable."

The company wrote in a blog post (https://openai.com/blog/using-gpt-4-for-content-moderation) that GPT-4 can not only help make content moderation decisions, but also aid in developing policies and swiftly iterating on policy changes, "reducing the cycle from months to hours." It says the model can parse the rules and nuances in content policies and instantly adapt to any updates, which, OpenAI claims, results in more consistent labeling of content.

"We believe this offers a more positive vision of the future of digital platforms, where AI can help moderate online traffic according to platform-specific policy and relieve the mental burden of a large number of human moderators," OpenAI's Lilian Weng, Vik Goel and Andrea Vallone wrote. "Anyone with OpenAI API access can implement this approach to create their own AI-assisted moderation system." OpenAI claims GPT-4 moderation tools can help companies carry out around six months of work in about a day.
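OpenAI's post doesn't ship code, but the basic shape of the approach is simple: hand GPT-4 the policy text plus a piece of content and ask it for a label. Below is a minimal sketch using the openai Python package as it existed in August 2023 (the pre-1.0 ChatCompletion interface); the policy text, label names and prompt wording are illustrative assumptions, not OpenAI's actual prompts.

    # Sketch of policy-based labeling with GPT-4 via the OpenAI API.
    # POLICY, the label scheme and the prompt are hypothetical examples.
    import openai

    openai.api_key = "sk-..."  # placeholder; set your own API key

    # Hypothetical platform policy. In OpenAI's described workflow, policy
    # experts write this text, compare GPT-4's labels against a small
    # human-labeled test set, and revise the wording where they disagree.
    POLICY = """
    K0: No violation.
    K1: Content that praises or encourages self-harm.
    K2: Content that provides instructions for making weapons.
    """

    def moderate(content: str) -> str:
        """Ask GPT-4 to classify `content` under POLICY; return the label."""
        response = openai.ChatCompletion.create(
            model="gpt-4",
            temperature=0,  # deterministic output for consistent labeling
            messages=[
                {"role": "system",
                 "content": "You are a content moderator. Classify the "
                            "user's content with exactly one label from "
                            "this policy. Answer with the label only.\n"
                            + POLICY},
                {"role": "user", "content": content},
            ],
        )
        return response["choices"][0]["message"]["content"].strip()

According to the blog post, the fast iteration comes from the loop around this call: where the model's labels disagree with human experts, the experts can ask GPT-4 to explain its reasoning, clarify the ambiguous policy wording, and immediately re-run the labeling, which is the basis of the "months to hours" claim.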
It's been well-documented that manually reviewing traumatic content can have a significant impact on human moderators' mental health, particularly when it comes to graphic material. In 2020, Meta agreed (https://www.engadget.com/facebook-content-moderators-lawsuit-settlement-212601146.html) to pay more than 11,000 moderators at least $1,000 each in compensation for mental health issues that may have stemmed from reviewing material posted on Facebook.

Using AI to lift some of the burden from human reviewers could be greatly beneficial. Meta, for one, has been employing AI to help moderators (https://www.engadget.com/facebook-moderation-ai-machine-learning-prioritize-161027473.html) for several years. Yet OpenAI says that, until now, human moderators have received help from "smaller vertical-specific machine learning models. The process is inherently slow and can lead to mental stress on human moderators."

AI models are far from perfect. Major companies have long been using AI in their moderation processes and, with or without the aid of the technology, still get big content decisions wrong (https://www.engadget.com/oversight-board-criticizes-meta-for-refusing-to-take-down-brazilian-pro-insurrection-video-124533251.html). It remains to be seen whether OpenAI's system can avoid the major moderation traps other companies have fallen into over the years.

In any case, OpenAI agrees that humans still need to be involved in the process. "We've continued to have human review to verify some of the model judgements," Vallone, who works on OpenAI's policy team, told Bloomberg (https://www.bloomberg.com/news/articles/2023-08-15/chatgpt-creator-openai-is-testing-content-moderation-systems?utm_medium=social&utm_source=twitter&cmpid%3D=socialflow-twitter-tech&utm_content=tech&utm_campaign=socialflow-organic&sref=10lNAhZ9).

"Judgments by language models are vulnerable to undesired biases that might have been introduced into the model during training. As with any AI application, results and output will need to be carefully monitored, validated and refined by maintaining humans in the loop," OpenAI's blog post reads. "By reducing human involvement in some parts of the moderation process that can be handled by language models, human resources can be more focused on addressing the complex edge cases most needed for policy refinement."
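Neither the article nor the blog post spells out how that human review is wired up in practice. A minimal sketch, reusing the hypothetical moderate() helper from the earlier snippet: escalate everything the model flags, plus a random sample of what it passes, to a human queue. The escalation rule, spot-check rate and queue are all assumptions for illustration, not anything OpenAI has published.

    import random

    REVIEW_QUEUE = []       # assumed in-memory stand-in for a real review tool
    SPOT_CHECK_RATE = 0.05  # assumed fraction of "clean" content audited by humans

    def moderate_with_oversight(content: str) -> str:
        """Label content with GPT-4, escalating some decisions to humans."""
        label = moderate(content)  # hypothetical helper from the earlier sketch
        # Route every flagged item, plus a random sample of items the model
        # passed, to human reviewers, so model judgements get verified and
        # tricky edge cases feed back into policy refinement.
        if label != "K0" or random.random() < SPOT_CHECK_RATE:
            REVIEW_QUEUE.append({"content": content, "model_label": label})
        return label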

Source: OpenAI is using GPT-4 to build an AI-powered content moderation system (https://www.engadget.com/openai-is-using-gpt-4-to-build-an-ai-powered-content-moderation-system-184933519.html?src=rss)