Author Topic: Websites accuse AI startup Anthropic of bypassing their anti-scraping rules and protocol
HCK
« on: July 31, 2024, 04:05:06 pm »

Websites accuse AI startup Anthropic of bypassing their anti-scraping rules and protocol

<p>Freelancer has accused Anthropic, the AI startup behind the Claude large language models, of ignoring its &quot;do not crawl&quot; robots.txt protocol to scrape its websites' data. Meanwhile, iFixit CEO Kyle Wiens said Anthropic has ignored the website's policy prohibiting the use of its content for AI model training. Matt Barrie, the chief executive of Freelancer, told <a href="https://www.ft.com/content/07611b74-3d69-4579-9089-f2fc2af61baa">The Information</a> that Anthropic's ClaudeBot is &quot;the most aggressive scraper by far.&quot; His website allegedly received 3.5 million visits from the company's crawler within a span of four hours, &quot;probably about five times the volume of the number two&quot; AI crawler. Similarly, Wiens <a href="https://x.com/kwiens/status/1816304897484284007">posted on X/Twitter</a> that Anthropic's bot hit iFixit's servers a million times in 24 hours. &quot;You're not only taking our content without paying, you're tying up our devops resources,&quot; he wrote.</p>
<p>Back in June, <a href="https://www.engadget.com/ai-companies-are-reportedly-still-scraping-websites-despite-protocols-meant-to-block-them-132308524.html">Wired accused</a> another AI company, Perplexity, of crawling its website despite the presence of the Robots Exclusion Protocol, or robots.txt. A robots.txt file contains instructions telling web crawlers which pages they can and can't access. Compliance is voluntary, however, and badly behaved bots tend to simply ignore it. After <a href="https://www.wired.com/story/perplexity-is-a-bullshit-machine/">Wired's piece</a> came out, TollBit, a startup that connects AI firms with content publishers, reported that Perplexity isn't the only company bypassing robots.txt signals. While TollBit didn't name names, <a href="https://www.businessinsider.com/openai-anthropic-ai-ignore-rule-scraping-web-contect-robotstxt">Business Insider</a> said it learned that OpenAI and Anthropic were ignoring the protocol as well.</p>
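As an illustration, a minimal robots.txt that blocks one crawler by name while leaving the site open to everyone else might look like the sketch below. The "ClaudeBot" user-agent string matches the name reported for Anthropic's crawler; the `/private/` path is a hypothetical example, not taken from any real site's rules.

```
# Block Anthropic's crawler from the entire site
User-agent: ClaudeBot
Disallow: /

# All other crawlers may access everything except a private area
User-agent: *
Disallow: /private/
```

Note that, as the article points out, these directives are advisory only: nothing in the protocol technically prevents a crawler from fetching a disallowed page.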
<span id="end-legacy-contents"></span><p>Barrie said Freelancer initially tried to refuse the bot's access requests, but ultimately had to block Anthropic's crawler entirely. &quot;This is egregious scraping [which] makes the site slower for everyone operating on it and ultimately affects our revenue,&quot; he added. As for iFixit, Wiens said the website has alarms set for high traffic, and his staff were woken up at 3AM by Anthropic's activity. The company's crawler stopped scraping iFixit after the site added a line to its <a href="https://www.ifixit.com/robots.txt">robots.txt file</a> that specifically disallows Anthropic's bot.</p>
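For context on what "respecting the signal" means in practice: a compliant crawler parses a site's robots.txt and checks each URL against it before fetching. A minimal sketch using Python's standard-library `urllib.robotparser`, with a hypothetical rules file that disallows ClaudeBot entirely (the user-agent name matches the one reported for Anthropic's crawler; the URLs are illustrative):

```python
from urllib import robotparser

# Hypothetical robots.txt contents: ClaudeBot is blocked from the whole
# site, while every other crawler is allowed everywhere.
rules = """\
User-agent: ClaudeBot
Disallow: /

User-agent: *
Allow: /
"""

# In a real crawler these lines would be fetched from
# https://example.com/robots.txt; here we parse them directly.
rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# A compliant crawler calls can_fetch() before every request.
print(rp.can_fetch("ClaudeBot", "https://example.com/guide/123"))    # False
print(rp.can_fetch("GenericBot", "https://example.com/guide/123"))   # True
```

The check is entirely client-side, which is the crux of the dispute: a crawler that skips the `can_fetch` step faces no technical barrier, only the site's own rate limits or blocks.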
<p>The AI startup told The Information that it respects robots.txt and that its crawler &quot;respected that signal when iFixit implemented it.&quot; It also said that it aims &quot;for minimal disruption by being thoughtful about how quickly [it crawls] the same domains,&quot; which is why it's now investigating the case.</p>
<p>AI firms use crawlers to collect content from websites to train their generative AI technologies. They've been the <a href="https://www.engadget.com/the-new-york-times-is-suing-openai-and-microsoft-for-copyright-infringement-181212615.html">target of multiple lawsuits</a> as a result, with publishers accusing them of copyright infringement. To head off further lawsuits, companies like OpenAI have been striking deals with publishers and websites. OpenAI's content partners so far include <a href="https://www.engadget.com/openai-will-reportedly-pay-250-million-to-put-news-corps-journalism-in-chatgpt-214615249.html">News Corp</a>, <a href="https://www.engadget.com/the-atlantic-and-vox-media-made-their-own-deal-with-the-ai-devil-161017636.html">Vox Media</a>, the <a href="https://www.engadget.com/openai-will-train-its-ai-models-on-the-financial-times-journalism-173249177.html">Financial Times</a> and <a href="https://www.engadget.com/openai-strikes-deal-to-put-reddit-posts-in-chatgpt-224133045.html">Reddit</a>. iFixit's Wiens seems open to signing a deal for the how-to-repair website's articles as well, telling Anthropic in a tweet that he's willing to have a conversation about licensing content for commercial use.</p>
<div id="f5a4206e3ef74fe7a391febf1fbde6c4"><blockquote class="twitter-tweet"><p lang="en" dir="ltr">If any of those requests accessed our terms of service, they would have told you that use of our content expressly forbidden. But don't ask me, ask Claude!

If you want to have a conversation about licensing our content for commercial use, we're right here. pic.twitter.com/CAkOQDnLjD</p>— Kyle Wiens (@kwiens) July 24, 2024</blockquote></div>
<p>This article originally appeared on Engadget at https://www.engadget.com/websites-accuse-ai-startup-anthropic-of-bypassing-their-anti-scraping-rules-and-protocol-133022756.html?src=rss</p>

Source: Websites accuse AI startup Anthropic of bypassing their anti-scraping rules and protocol