Author Topic: Researchers found child abuse material in the largest AI image generation dataset  (Read 150 times)
HCK
Global Moderator
Hero Member
*****
Posts: 79425



« on: December 25, 2023, 04:05:06 pm »

Researchers found child abuse material in the largest AI image generation dataset

Researchers from the Stanford Internet Observatory say that a dataset used to train AI image generation tools contains at least 1,008 validated instances of child sexual abuse material (CSAM). The Stanford researchers note that the presence of CSAM in the dataset could allow AI models trained on that data to generate new, and even realistic, instances of CSAM.
LAION, the non-profit that created the dataset, told [url=https://www.404media.co/laion-datasets-removed-stanford-csam-child-abuse/]404 Media[/url] that it "has a zero tolerance policy for illegal content and in an abundance of caution, we are temporarily taking down the LAION datasets to ensure they are safe before republishing them." The organization added that, before publishing its datasets in the first place, it created filters to detect and remove illegal content from them. However, 404 Media points out that LAION leaders have been aware since at least 2021 that there was a possibility of their systems picking up CSAM as they vacuumed up billions of images from the internet.
[url=https://www.bloomberg.com/news/features/2023-04-24/a-high-school-teacher-s-free-image-database-powers-ai-unicorns?leadSource=uverify%20wall&sref=10lNAhZ9]According to previous reports[/url], the LAION-5B dataset in question contains "millions of images of pornography, violence, child nudity, racist memes, hate symbols, copyrighted art and works scraped from private company websites." Overall, it includes more than 5 billion entries pairing image links with descriptive captions (the dataset itself doesn't host any images, only links to scraped images and their alt text). LAION founder Christoph Schuhmann said earlier this year that while he was not aware of any CSAM in the dataset, he hadn't examined the data in great depth.
It's illegal for most institutions in the US to view CSAM for verification purposes. As such, the Stanford researchers used several techniques to look for potential CSAM. According to [url=https://stacks.stanford.edu/file/druid:kh752sm9123/ml_training_data_csam_report-2023-12-20.pdf]their paper[/url], they employed "perceptual hash-based detection, cryptographic hash-based detection, and nearest-neighbors analysis leveraging the image embeddings in the dataset itself." They found 3,226 entries that contained suspected CSAM. Many of those images were confirmed as CSAM by third parties such as PhotoDNA and the Canadian Centre for Child Protection.
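For readers unfamiliar with hash-based matching, here is a minimal Python sketch of the general idea only. It is not the researchers' pipeline: the "known_hashes.txt" file, the threshold value, and the image path are hypothetical, and real screening systems rely on vetted hash lists from child-protection organizations rather than anything assembled by hand.

[code]
# Illustrative sketch of hash-based image matching (not the Stanford pipeline).
# Assumes a hypothetical "known_hashes.txt" with one perceptual hash per line,
# supplied by a trusted hash-list provider.
import hashlib

import imagehash               # pip install ImageHash
from PIL import Image          # pip install Pillow

HAMMING_THRESHOLD = 5          # hypothetical tolerance for near-duplicate images


def cryptographic_hash(path: str) -> str:
    """Exact-match fingerprint: changes completely if even one byte differs."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()


def perceptual_hash(path: str) -> imagehash.ImageHash:
    """Fuzzy fingerprint: stays similar under resizing, re-encoding, cropping."""
    return imagehash.phash(Image.open(path))


def matches_known_list(path: str, known_hashes: list[imagehash.ImageHash]) -> bool:
    """Flag an image if its perceptual hash is within the Hamming-distance
    threshold of any hash on the known list."""
    candidate = perceptual_hash(path)
    return any(candidate - known <= HAMMING_THRESHOLD for known in known_hashes)


if __name__ == "__main__":
    with open("known_hashes.txt") as f:
        known = [imagehash.hex_to_hash(line.strip()) for line in f if line.strip()]
    print(matches_known_list("downloaded_image.jpg", known))
[/code]

The cryptographic hash only catches byte-identical files, which is why perceptual hashing (and, in the paper, nearest-neighbors search over the dataset's own embeddings) is used to catch re-encoded or lightly edited copies.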
Stability AI founder Emad Mostaque trained [url=https://www.engadget.com/tag/stable-diffusion/]Stable Diffusion[/url] using a subset of LAION-5B data. The first research version of Google's Imagen text-to-image model was [url=https://www.engadget.com/google-imagen-text-to-image-ai-unprecedented-photorealism-144205123.html]trained on[/url] LAION-400M, but that version was never released; Google says that no subsequent iterations of Imagen use any LAION datasets. A Stability AI spokesperson told [url=https://www.bloomberg.com/news/articles/2023-12-20/large-ai-dataset-has-over-1-000-child-abuse-images-researchers-find]Bloomberg[/url] that it prohibits the use of its text-to-image systems for illegal purposes, such as creating or editing CSAM. “This report focuses on the LAION-5B dataset as a whole,” the spokesperson said. “Stability AI models were trained on a filtered subset of that dataset. In addition, we fine-tuned these models to mitigate residual behaviors.”
Stable Diffusion 2 (a more recent version of Stability AI's image generation tool) was trained on a version of the dataset that substantially filtered out "unsafe" materials. That, Bloomberg notes, makes it more difficult for users to generate explicit images. However, Stable Diffusion 1.5, which is still available on the internet, reportedly does not have the same protections. "Models based on Stable Diffusion 1.5 that have not had safety measures applied to them should be deprecated and distribution ceased where feasible," the Stanford paper's authors wrote.
[b]Correction, 4:30PM ET:[/b] This story originally stated that Google's Imagen tool used a subset of LAION-5B data. The story has been updated to note that Imagen used LAION-400M in its first research version, but hasn't used any LAION data since then. We apologize for the error.

This article originally appeared on Engadget at https://www.engadget.com/researchers-found-child-abuse-material-in-the-largest-ai-image-generation-dataset-154006002.html?src=rss

Source: Researchers found child abuse material in the largest AI image generation dataset