Anthropic Unleashes Banhammer: OpenClaw Creator Denied Claude Access

Published 1 hour ago · 3 minute read
Uche Emeka

Peter Steinberger, the creator of OpenClaw, recently ignited a discussion in the AI community when he announced on X that his Anthropic account had been suspended for “suspicious” activity. His post, published early Friday morning, included a screenshot of the message from Anthropic notifying him of the ban. The controversy was short-lived, however: his account was reinstated just a few hours later, after the post went viral.

Amid hundreds of comments, many speculating about the implications given Steinberger’s employment with Anthropic’s rival OpenAI, an Anthropic engineer responded. The engineer clarified that Anthropic has never banned users for employing OpenClaw and offered Steinberger assistance. While it remains unclear whether this intervention directly led to the account’s restoration, the incident shed light on several underlying tensions.

The temporary ban closely followed Anthropic’s recent announcement that subscriptions to its Claude AI model would no longer cover “third-party harnesses including OpenClaw.” Consequently, OpenClaw users must now pay for their usage separately through Claude’s API, billed by consumption. The new pricing structure has been colloquially dubbed a “claw tax”: Anthropic, which also offers its own agent called Cowork, is effectively charging for the use of external tools.

Steinberger stated he was adhering to the new API usage rules, yet his account was suspended anyway. Anthropic justified the pricing change by explaining that existing subscriptions were not designed to accommodate the “usage patterns” of claws. Claws often involve compute-intensive operations, such as continuous reasoning loops, automated task repetition, retries, and integration with third-party tools, which can far exceed the usage of simple prompts or scripts.

However, Steinberger was not convinced by Anthropic’s rationale. Following the pricing change, he critically posted, “Funny how timings match up, first they copy some popular features into their closed harness, then they lock out open source.” Although he did not explicitly name the features, his comments likely referred to additions to Claude’s Cowork agent, such as Claude Dispatch. Dispatch, which enables users to remotely control agents and assign tasks, was rolled out approximately two weeks before Anthropic revised its OpenClaw pricing policy, fueling Steinberger’s frustration.

Steinberger’s ongoing discontent with Anthropic was evident during the X exchange. When one user implied that his decision to join OpenAI rather than Anthropic was a misstep, Steinberger sharply responded, “One welcomed me, one sent legal threats,” highlighting a contentious past relationship. He also addressed queries regarding his continued use of Claude models despite working for OpenAI, explaining that his primary motivation is for testing purposes. He emphasized the distinction between his work at the OpenClaw Foundation, which aims to ensure OpenClaw’s compatibility with “any model provider,” and his role at OpenAI, where he contributes to future product strategy.

Many users pointed out that the necessity for Steinberger to test Claude stems from its enduring popularity among OpenClaw users, even over OpenAI’s ChatGPT. To this, Steinberger cryptically replied, “Working on that,” hinting at potential future developments at OpenAI related to this preference. The entire episode underscores the complex dynamics and competitive landscape within the rapidly evolving AI industry, particularly concerning the interplay between proprietary AI models and open-source development tools.

