Multiple AI companies bypassing web standard to scrape publisher sites, licensing firm says
Multiple artificial intelligence companies are circumventing a common web standard used by publishers to block the scraping of their content for use in generative AI systems, content licensing startup TollBit has told publishers.
A letter to publishers, seen by Reuters on Friday, does not name the AI companies or the publishers affected. It comes amid a public dispute between AI search startup Perplexity and media outlet Forbes involving the same web standard, and a broader debate between tech and media firms over the value of content in the age of generative AI.
The business media publisher publicly accused Perplexity of plagiarizing its investigative stories in AI-generated summaries without citing Forbes or asking for its permission.
A Wired investigation published this week found Perplexity likely bypassing efforts to block its web crawler via the Robots Exclusion Protocol, or “robots.txt,” a widely accepted standard meant to determine which parts of a site are allowed to be crawled.
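For illustration, a publisher that wants to opt out of such crawling would publish rules like the following at the root of its site (the user-agent token `ExampleAIBot` is a hypothetical stand-in for an AI company's crawler, not a real identifier):

```
# robots.txt — served at https://example.com/robots.txt (illustrative)
# Block a specific AI crawler from the whole site:
User-agent: ExampleAIBot
Disallow: /

# Allow all other crawlers:
User-agent: *
Allow: /
```

Compliance with these rules is voluntary; the file only signals the publisher's wishes to crawlers that choose to honor it.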
Perplexity declined a Reuters request for comment on the dispute.
The News Media Alliance, a trade group representing more than 2,200 U.S.-based publishers, expressed concern about the impact that ignoring “do not crawl” signals could have on its members.
“Without the ability to opt out of massive scraping, we cannot monetize our valuable content and pay journalists. This could seriously harm our industry,” said Danielle Coffey, president of the group.
TollBit, an early-stage startup, is positioning itself as a matchmaker between content-hungry AI companies and publishers open to striking licensing deals with them.
The company tracks AI traffic to the publishers’ websites and uses analytics to help both sides settle on fees to be paid for the use of different types of content.
For example, publishers may opt to set higher rates for “premium content, such as the latest news or exclusive insights,” the company says on its website.
It says it had 50 websites live as of May, though it has not named them.
According to the TollBit letter, Perplexity is not the only offender that appears to be ignoring robots.txt.
TollBit said its analytics indicate “numerous” AI agents are bypassing the protocol, a standard tool used by publishers to indicate which parts of their sites can be crawled.
“What this means in practical terms is that AI agents from multiple sources (not just one company) are opting to bypass the robots.txt protocol to retrieve content from sites,” TollBit wrote. “The more publisher logs we ingest, the more this pattern emerges.”
The robots.txt protocol was created in the mid-1990s as a way to avoid overloading websites with web crawlers. Although there is no clear legal enforcement mechanism, historically there has been widespread compliance on the web and some groups – including the News Media Alliance – say there may yet be legal recourse for publishers.
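A well-behaved crawler consults robots.txt before fetching a page. A minimal sketch of that check, using Python's standard `urllib.robotparser` module (the rules string and user-agent names here are illustrative, not taken from any real publisher's site):

```python
# Sketch of how a compliant crawler checks robots.txt before fetching.
# The rules and user-agent names below are illustrative examples.
import urllib.robotparser

rules = """\
User-agent: ExampleAIBot
Disallow: /

User-agent: *
Allow: /
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# The named AI bot is disallowed everywhere; other crawlers are allowed.
print(rp.can_fetch("ExampleAIBot", "https://example.com/article"))  # False
print(rp.can_fetch("SomeOtherBot", "https://example.com/article"))  # True
```

The bypassing described in the letter amounts to a crawler skipping this check (or ignoring its result) and fetching the content anyway, which the protocol itself cannot prevent.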
More recently, robots.txt has become a key tool publishers have used to block tech companies from ingesting their content free of charge for use in generative AI systems that can mimic human creativity and instantly summarize articles.
The AI companies use the content both to train their algorithms and to generate summaries of real-time information.
Some publishers, including the New York Times, have sued AI companies for copyright infringement over those uses. Others are signing licensing agreements with the AI companies open to paying for content, although the sides often disagree over the value of the materials. Many AI developers argue they have broken no laws in accessing them for free.
Thomson Reuters, the owner of Reuters News, is among those that have struck deals to license news content for use by AI models.
Source: indianexpress.com