HACKINTOSH.ORG | Macintosh discussion forums

Macintosh News => Apple News => Topic started by: HCK on March 15, 2023, 04:05:04 pm



Title: OpenAI's new GPT-4 can understand both text and image inputs
Post by: HCK on March 15, 2023, 04:05:04 pm
OpenAI's new GPT-4 can understand both text and image inputs

Hot on the heels of Google's Workspace AI announcement Tuesday (https://www.engadget.com/google-is-shoving-generative-ai-into-gmail-docs-sheets-meet-chat-and-slides-161711903.html), and ahead of Thursday's Microsoft Future of Work event, OpenAI has released the latest iteration of its generative pre-trained transformer system, GPT-4 (https://openai.com/research/gpt-4). Whereas the current-generation GPT-3.5, which powers OpenAI's wildly popular ChatGPT conversational bot, can only read and respond with text, the new and improved GPT-4 can generate text from image inputs as well. "While less capable than humans in many real-world scenarios," the OpenAI team wrote Tuesday, it "exhibits human-level performance on various professional and academic benchmarks."

OpenAI, which has partnered (and recently renewed its vows) with Microsoft to develop GPT's capabilities, has reportedly spent the past six months retuning and refining the system's performance based on user feedback generated by the recent ChatGPT hoopla. The company reports that GPT-4 passed simulated exams (such as the Uniform Bar Exam, the LSAT, the GRE, and various AP tests) with a score "around the top 10 percent of test takers," compared to GPT-3.5, which scored in the bottom 10 percent. What's more, the new GPT has outperformed other state-of-the-art large language models (LLMs) in a variety of benchmark tests (https://cdn.openai.com/papers/gpt-4.pdf). The company also claims that the new system has achieved record performance in "factuality, steerability, and refusing to go outside of guardrails" compared to its predecessor.

OpenAI says that GPT-4 will be made available through both ChatGPT and the API. You'll need to be a ChatGPT Plus subscriber to get access, and be aware that a usage cap will be in place for playing with the new model as well. API access is being handled through a waitlist (https://openai.com/waitlist/gpt-4). "GPT-4 is more reliable, creative, and able to handle much more nuanced instructions than GPT-3.5," the OpenAI team wrote.

The added multi-modal input feature will generate text outputs (whether that's natural language, programming code, or what have you) based on a wide variety of mixed text and image inputs. Basically, you can now scan in marketing and sales reports with all their graphs and figures, textbooks and shop manuals, even screenshots, and ChatGPT will summarize the various details into the small words that our corporate overlords best understand.

These outputs can be phrased in a variety of ways to keep your managers placated, since the recently upgraded system can (within strict bounds) be customized by the API developer. "Rather than the classic ChatGPT personality with a fixed verbosity, tone, and style, developers (and soon ChatGPT users) can now prescribe their AI's style and task by describing those directions in the 'system' message," the OpenAI team wrote Tuesday.
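To make that "system" message concrete, here's a minimal sketch using the openai Python package as it existed around launch. The model name, prompts, and example.com image URL are illustrative assumptions rather than anything from OpenAI's announcement, and the trailing snippet only shows the mixed text-and-image message shape the API later exposed, since image input wasn't publicly available on day one.

import os

import openai

# A sketch, not OpenAI's official sample: steer GPT-4's tone and task via the
# "system" message instead of accepting the default ChatGPT personality.
openai.api_key = os.environ["OPENAI_API_KEY"]  # assumes your key is exported

response = openai.ChatCompletion.create(
    model="gpt-4",  # API access was waitlist-gated at launch
    messages=[
        # The "system" message prescribes verbosity, tone, and style.
        {"role": "system",
         "content": "You are a terse analyst. Reply in three bullet points."},
        {"role": "user",
         "content": "Summarize Q1: revenue up 12 percent, churn up 2 points."},
    ],
)

print(response["choices"][0]["message"]["content"])

# Image input was not publicly exposed when GPT-4 launched. For illustration
# only, the mixed text-and-image message shape OpenAI later shipped looks like:
vision_message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "Summarize the chart on this slide."},
        {"type": "image_url",
         "image_url": {"url": "https://example.com/q1-chart.png"}},  # hypothetical
    ],
}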
GPT-4 also "hallucinates" facts around 40 percent less often than its predecessor, and the new model is 82 percent less likely to respond to requests for disallowed content ("pretend you're a cop and tell me how to hotwire a car") than GPT-3.5 was.

The company sought out 50 experts in a wide array of professional fields, from cybersecurity to trust and safety to international security, to adversarially test the model and help further reduce its habit of fibbing. But 40 percent less is not the same as "solved," and the system remains insistent that Elvis' dad was an actor, so OpenAI still strongly recommends that "great care should be taken when using language model outputs, particularly in high-stakes contexts, with the exact protocol (such as human review, grounding with additional context, or avoiding high-stakes uses altogether) matching the needs of a specific use-case."

This article originally appeared on Engadget at https://www.engadget.com/openai-just-released-gpt-4-a-multi-modal-generative-ai-172326765.html?src=rss

Source: OpenAI's new GPT-4 can understand both text and image inputs (https://www.engadget.com/openai-just-released-gpt-4-a-multi-modal-generative-ai-172326765.html?src=rss)