UK News Publisher Enforces Access Controls Against Unwanted Automated Traffic

Case Study

Background

A large UK-based news publisher operates multiple high-traffic digital properties with a strong focus on original reporting and investigative journalism. Over time, the publisher observed a steady increase in automated access from AI crawlers, scrapers, and headless browsers—many of which ignored declared access rules and placed growing strain on infrastructure.

While a robots.txt policy was in place, the publisher lacked reliable visibility into which actors complied with it and which did not. Existing tooling provided aggregated bot metrics but no practical way to distinguish compliant automation from unwanted access at a technical level.
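
To make that gap concrete, the sketch below shows the kind of cross-check the publisher initially lacked: comparing logged requests against robots.txt rules using Python's standard urllib.robotparser. The robots.txt rules, bot names, and URLs are illustrative assumptions only, not the publisher's actual policy or Centinel's internals.

    from urllib import robotparser

    # Hypothetical robots.txt rules; bot names and paths are illustrative only.
    robots_lines = [
        "User-agent: GPTBot",
        "Disallow: /",
        "",
        "User-agent: *",
        "Allow: /",
    ]

    rp = robotparser.RobotFileParser()
    rp.parse(robots_lines)

    # Hypothetical (user agent, URL) pairs as they might appear in access logs.
    logged_requests = [
        ("GPTBot", "https://news.example/investigations/exclusive-report"),
        ("Googlebot", "https://news.example/news/front-page"),
    ]

    for agent, url in logged_requests:
        verdict = "compliant" if rp.can_fetch(agent, url) else "ignored robots.txt"
        print(f"{agent:10s} {url:55s} {verdict}")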

Challenge

The publisher faced three core challenges:

  • Lack of enforceability: Access rules were declarative but not technically enforced

  • Limited attribution: Automated traffic could not be reliably classified or traced to specific behaviors

  • Operational risk: Any mitigation had to avoid SEO impact, false positives, or disruption to legitimate users

Additionally, legal and editorial stakeholders required clear evidence of non-compliant access before taking further action.

Solution

Centinel was deployed in monitoring mode at the edge, allowing the publisher to analyze automated traffic in real time without blocking.

Using behavioral signals, fingerprinting, and protocol-level analysis, Centinel provided a detailed breakdown of non-human traffic, including:

  • AI crawlers and large-scale scraping frameworks

  • Headless browser automation mimicking real user behavior

  • Repeated access patterns inconsistent with declared crawler identities (see the verification sketch after this list)
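
One widely used check for that last point is forward-confirmed reverse DNS, the method major crawler operators such as Google document for verifying their bots. The sketch below illustrates the idea in Python; Centinel's actual signals are not public, and the sample IP address and domain suffixes are assumptions for illustration.

    import socket

    def verify_declared_crawler(ip: str, expected_suffixes: tuple) -> bool:
        """Forward-confirmed reverse DNS: the IP's PTR hostname must belong to
        the claimed operator, and that hostname must resolve back to the IP."""
        try:
            hostname, _, _ = socket.gethostbyaddr(ip)
        except OSError:
            return False  # no usable reverse DNS record
        if not hostname.endswith(expected_suffixes):
            return False  # hostname is not under the claimed operator's domain
        try:
            forward_ips = socket.gethostbyname_ex(hostname)[2]
        except OSError:
            return False
        return ip in forward_ips

    # Hypothetical check: a request claims to be Googlebot, so its source IP
    # should reverse-resolve into googlebot.com or google.com and back again.
    print(verify_declared_crawler("66.249.66.1", (".googlebot.com", ".google.com")))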

After a review phase with engineering and legal teams, the publisher enabled targeted enforcement rules against clearly non-compliant traffic while allowing compliant crawlers to continue accessing content.
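
The rule set itself is specific to the deployment, but the decision logic can be summarised roughly as follows. This is a hypothetical sketch of the allow/block split, not Centinel's configuration format; the field names and categories are assumptions.

    from dataclasses import dataclass

    @dataclass
    class BotAssessment:
        declared_agent: str      # user agent the client presents, e.g. "GPTBot"
        identity_verified: bool  # passed checks such as reverse DNS verification
        respects_robots: bool    # observed compliance with robots.txt over time

    def decide(a: BotAssessment) -> str:
        # Verified crawlers that honour robots.txt keep full access to content.
        if a.identity_verified and a.respects_robots:
            return "allow"
        # Clearly non-compliant traffic (spoofed identity and ignored rules) is blocked.
        if not a.identity_verified and not a.respects_robots:
            return "block"
        # Ambiguous cases are challenged rather than hard-blocked, limiting the
        # risk of false positives for legitimate users.
        return "challenge"

    print(decide(BotAssessment("GPTBot", identity_verified=False, respects_robots=False)))   # block
    print(decide(BotAssessment("Googlebot", identity_verified=True, respects_robots=True)))  # allow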

Results

Within weeks of enabling enforcement:

  • A significant portion of unwanted automated requests was blocked

  • Infrastructure load from abusive traffic was reduced without performance regressions

  • Editorial and legal teams gained concrete evidence of access violations

  • Search visibility and legitimate discovery remained unaffected

Importantly, the publisher was able to move from a policy-based stance to technical enforcement, backed by measurable data.

Outcome

Centinel became a permanent part of the publisher’s traffic control stack, providing ongoing visibility and enforcement against evolving bot behavior. The publisher now uses Centinel not only to block unwanted access, but also to inform internal discussions around AI usage, licensing, and content governance.