
The Dark Side of AI: Anthropic's Secret A/B Tests on Claude Code

A tech journalist uncovers how Anthropic secretly experiments on users without consent, raising questions about transparency and control.

14-03-2026



The tech world is abuzz over revelations that Anthropic, maker of the popular AI coding tool Claude Code, has been conducting silent A/B tests on its users. These undisclosed experiments have raised serious concerns among professionals who rely on the tool for their daily work.

Unintended Consequences

A tech journalist who pays $200 a month for access to Claude Code recently discovered that his workflow had been significantly affected by these secret tests. The A/B test in question, named tengu_pewter_ledger, actively degrades the user experience in several ways.

Four Variants of Degradation

The tengu_pewter_ledger experiment involves four distinct variants: null, trim, cut, and cap. Each variant progressively restricts functionality to varying degrees:

  • null: No change in the user experience.
  • trim: Minor reductions, but still allows a full context section with prose explanations.
  • cut: Further restrictions, reducing the amount of detail and explanation provided to users.
  • cap: The most aggressive variant that limits plans to 40 lines without any discourse or background sections. It strictly forbids prose paragraphs and enforces a terse bullet-point format for all outputs.

The journalist was assigned the cap variant, which presented him with a sub-agent-generated plan consisting of terse bullet points instead of detailed explanations. The sudden change in behavior caught users off guard, as Anthropic provided no notification or opt-in mechanism.
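To make the variant scheme concrete, here is a minimal sketch of how such experiment-driven gating might work. The variant names and the 40-line cap come from the reporting; the line limits for trim and cut, the function names, and the gating logic itself are illustrative assumptions, not Anthropic's actual implementation.

```python
# Hypothetical sketch of A/B-variant gating on plan output.
# Only "cap" = 40 lines is reported; the trim/cut limits are assumed.
PLAN_LINE_LIMITS = {
    "null": None,  # no change to the user experience
    "trim": 120,   # assumed: minor reduction, prose still allowed
    "cut": 80,     # assumed: less detail and explanation
    "cap": 40,     # reported: 40 lines max, terse bullets only
}

def apply_variant(plan_lines: list[str], variant: str) -> list[str]:
    """Truncate a generated plan according to the assigned experiment variant."""
    limit = PLAN_LINE_LIMITS.get(variant)
    if limit is None:
        return plan_lines
    return plan_lines[:limit]

plan = [f"- step {i}" for i in range(100)]
print(len(apply_variant(plan, "null")))  # 100
print(len(apply_variant(plan, "cap")))   # 40
```

The point of the sketch is how invisible such a gate is: nothing in the output tells the user which variant they were assigned, which is exactly the transparency problem the journalist describes.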

No Transparency and No Control

According to the journalist’s findings, these experiments are being conducted without any form of user consent or notification. The only way for a user to discover they have been enrolled in an A/B test is through decompiling the binary code themselves—a process that most users would not undertake.

The lack of transparency and control over one's own tool has raised significant ethical concerns, especially given Anthropic’s position as an AI safety company. The journalist argues that such practices are reminiscent of product engineering cultures found at large tech companies like Meta (formerly Facebook), where user experimentation was once common but is now subject to strict regulations.

Impact on Professional Workflows

The impact of these secret tests has been particularly pronounced for professionals who rely heavily on Claude Code. The journalist notes that engineers complain daily about unexpected regressions in the application, often attributable to undisclosed A/B tests they never consented to.

This situation highlights a broader issue within the AI industry: how do we ensure transparency and accountability when experimenting with powerful tools like language models? While such experiments can lead to valuable insights, they must be conducted ethically and responsibly. Users should have clear opt-in mechanisms and informed choices about participating in any tests or trials.

Call for Transparency

The journalist calls on Anthropic to provide greater transparency regarding their testing practices. He suggests that users need the ability to configure tools like Claude Code according to their needs, rather than having critical functions altered without notice.

"Transparency is a critical part of responsible AI development," he emphasizes. "We must be able to steer these powerful tools and understand how they work in order to trust them fully."
