Author Topic: Meta plans to ramp up labeling of AI-generated images across its platforms  (Read 111 times)
HCK
Global Moderator
Hero Member
*****
Posts: 79425



« on: February 08, 2024, 04:05:05 pm »

Meta plans to ramp up labeling of AI-generated images across its platforms

Meta plans to ramp up its labeling of AI-generated images across Facebook, Instagram and Threads to help make it clear that the visuals are artificial. It's part of a broader push to tamp down misinformation and disinformation, which is particularly significant as we wrangle with the ramifications of generative AI (GAI) in a major election year in the US and other countries.
According to Meta's president of global affairs, Nick Clegg, the company has been working with partners from across the industry to develop standards that include signifiers that an image, video or audio clip has been generated using AI. "Being able to detect these signals will make it possible for us to label AI-generated images that users post to Facebook, Instagram and Threads," Clegg wrote in a Meta Newsroom post. "We’re building this capability now, and in the coming months we’ll start applying labels in all languages supported by each app." Clegg added that, as it expands these capabilities over the next year, Meta expects to learn more about "how people are creating and sharing AI content, what sort of transparency people find most valuable and how these technologies evolve." Those lessons will help inform both industry best practices and Meta's own policies, he wrote.
<span id="end-legacy-contents"></span><p>Meta says the tools it's working on will be able to detect invisible signals — namely AI generated information that aligns with the <a data-i13n="elm:context_link;elmt:doNotAffiliate;cpos:6;pos:1" class="no-affiliate-link" href="https://c2pa.org/specifications/specifications/1.3/specs/C2PA_Specification.html#_digital_signatures" data-original-link="https://c2pa.org/specifications/specifications/1.3/specs/C2PA_Specification.html#_digital_signatures">C2PA[/url] and <a data-i13n="elm:context_link;elmt:doNotAffiliate;cpos:7;pos:1" class="no-affiliate-link" href="https://iptc.org/standards/photo-metadata/iptc-standard/" data-original-link="https://iptc.org/standards/photo-metadata/iptc-standard/">IPTC[/url] technical standards — at scale. As such, it expects to be able to pinpoint and label images from Google, OpenAI, Microsoft, Adobe, Midjourney and Shutterstock, all of which are incorporating GAI metadata into images that their products whip up.</p>
As for GAI video and audio, Clegg points out that companies in the space haven't started incorporating invisible signals into that content at the same scale they have with images. Because of that, Meta isn't yet able to detect video and audio generated by third-party AI tools. In the meantime, Meta expects users to label such content themselves.
<p>"While the industry works towards this capability, we’re adding a feature for people to disclose when they share AI-generated video or audio so we can add a label to it," Clegg wrote. "We’ll require people to use this disclosure and label tool when they post organic content with a photorealistic video or realistic-sounding audio that was digitally created or altered, and we may apply penalties if they fail to do so. If we determine that digitally created or altered image, video or audio content creates a particularly high risk of materially deceiving the public on a matter of importance, we may add a more prominent label if appropriate, so people have more information and context."</p>
That said, putting the onus on users to add disclosures and labels to AI-generated video and audio seems like a non-starter. Many of those people will be trying to intentionally deceive others. On top of that, others likely just won't bother or won't be aware of the GAI policies.
In addition, Meta is looking to make it harder for people to alter or remove invisible markers from GAI content. The company's FAIR AI research lab has developed tech that "integrates the watermarking mechanism directly into the image generation process for some types of image generators, which could be valuable for open source models so the watermarking can’t be disabled," Clegg wrote. Meta is also working on ways to automatically detect AI-generated material that doesn't have invisible markers.
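As a rough illustration of how a watermark baked into the generator might be checked downstream, here is a toy sketch. The key, the threshold and the extractor are all assumptions for illustration; FAIR's actual approach pairs the generator's decoder with a trained extractor network, which the stand-in below does not reproduce.

Code:
# Toy sketch (illustrative assumptions throughout): the platform recovers a
# bit string from an image with some extractor and flags the image as
# watermarked if enough bits match the provider's secret key.
from typing import Callable
import numpy as np

KEY_BITS = np.array([1, 0, 1, 1, 0, 0, 1, 0] * 6)  # made-up 48-bit provider key

def is_watermarked(image: np.ndarray,
                   extract_bits: Callable[[np.ndarray], np.ndarray],
                   threshold: float = 0.9) -> bool:
    """Flag the image if the recovered bits match the key closely enough.

    A threshold well above 0.5 keeps unwatermarked images, which match roughly
    half the key bits by chance, from being flagged.
    """
    recovered = extract_bits(image)
    bit_accuracy = float(np.mean(recovered == KEY_BITS))
    return bit_accuracy >= threshold

if __name__ == "__main__":
    # Stand-in extractor returning random bits, so a plain image scores around
    # 50% and is (correctly) not flagged. A real extractor would be a trained model.
    rng = np.random.default_rng(0)
    print(is_watermarked(np.zeros((256, 256, 3)), lambda img: rng.integers(0, 2, KEY_BITS.size)))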
Meta plans to continue collaborating with industry partners and "remain in a dialogue with governments and civil society" as GAI becomes more prevalent. It believes this is the right approach to handling content that's shared on Facebook, Instagram and Threads for the time being, though it will adjust things if necessary.
One key issue with Meta's approach, at least while it works on ways to automatically detect GAI content that doesn't use the industry-standard invisible markers, is that it requires buy-in from partners. For instance, C2PA has a ledger-style method of authentication. For that to work, both the tools used to create images and the platforms on which they're hosted need to buy into C2PA.
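The reason both sides have to participate is easier to see with the underlying sign-and-verify pattern spelled out. The sketch below is a toy stand-in, not the real C2PA manifest format: a creation tool signs a claim over the image bytes, and the platform can only act on that claim if it checks the signature against a key it trusts. The tool name and claim fields are invented for illustration.

Code:
# Toy sketch of C2PA-style provenance (not the actual manifest format): the
# creation tool signs a claim about the asset, and the hosting platform
# verifies that signature before trusting or labeling the content.
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric import ed25519

# Creation tool's side: build and sign a claim over the generated image bytes.
tool_key = ed25519.Ed25519PrivateKey.generate()
image_bytes = b"...generated image bytes..."
claim = json.dumps({
    "generator": "example-image-model",  # hypothetical tool name
    "digital_source_type": "trainedAlgorithmicMedia",
    "asset_sha256": hashlib.sha256(image_bytes).hexdigest(),
}).encode()
signature = tool_key.sign(claim)

# Platform's side: accept the claim only if the signature verifies against a
# trusted key and the claim actually matches the uploaded asset.
def claim_is_valid(claim: bytes, signature: bytes, asset: bytes,
                   trusted_key: ed25519.Ed25519PublicKey) -> bool:
    try:
        trusted_key.verify(signature, claim)
    except Exception:
        return False
    return json.loads(claim)["asset_sha256"] == hashlib.sha256(asset).hexdigest()

print(claim_is_valid(claim, signature, image_bytes, tool_key.public_key()))  # True

If the creation tool never signs anything, or the platform never checks, the chain breaks, which is exactly the buy-in problem described above.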
Meta shared the update on its approach to labeling AI-generated content just a few days after CEO Mark Zuckerberg shed some more light on his company's plans to build general artificial intelligence. He noted that training data is one major advantage Meta has. The company estimates that the photos and videos shared on Facebook and Instagram amount to a dataset greater than Common Crawl, a corpus of some 250 billion web pages that has been used to train other AI models. Meta will be able to tap into both, and it doesn't have to share the data it has vacuumed up through Facebook and Instagram with anyone else.
The pledge to more broadly label AI-generated content also comes just one day after Meta's Oversight Board determined that a video that was misleadingly edited to suggest that President Joe Biden repeatedly touched his granddaughter's chest could stay on the company's platforms. In reality, Biden simply placed an "I voted" sticker on her shirt after she voted in person for the first time. The board determined that the video was permissible under Meta's rules on manipulated media, but it urged the company to update those community guidelines.

This article originally appeared on Engadget at https://www.engadget.com/meta-plans-to-ramp-up-labeling-of-ai-generated-images-across-its-platforms-160234038.html?src=rss

Source: Meta plans to ramp up labeling of AI-generated images across its platforms