
Anthropic temporarily banned OpenClaw’s creator from accessing Claude

“Yeah folks, it’s gonna be harder in the future to ensure OpenClaw still works with Anthropic models,” OpenClaw creator Peter Steinberger posted on X early Friday morning, along with a photo of a message from Anthropic saying his account had been suspended over “suspicious” activity.

The ban didn’t last long. A few hours later, after the post went viral, Steinberger said his account had been reinstated. Among hundreds of comments (many of them in conspiracy theory land, given that Steinberger is now employed by Anthropic rival OpenAI) was one by an Anthropic engineer. The engineer told the famed developer that Anthropic has never banned anyone for using OpenClaw and offered to help.

It’s not clear if that was the key that restored the account. (We’ve asked Anthropic about it.) But the whole message string was enlightening on many levels.

To recap the recent history: the ban followed Anthropic’s announcement last week that subscriptions to Claude would no longer cover “third-party harnesses including OpenClaw.”

OpenClaw users now have to pay for that usage separately, based on consumption, through Claude’s API. In essence, Anthropic, which offers its own agent, Cowork, is now charging a “claw tax.” Steinberger said he was following the new rule and paying through the API, but was banned anyway.

Anthropic said it instituted the pricing change because subscriptions weren’t built to handle the “usage patterns” of claws. Claws can be more compute-intensive than prompts or simple scripts because they may run continuous reasoning loops, automatically repeat or retry tasks, and tie into a lot of other third-party tools.

Steinberger, however, wasn’t buying that excuse. After Anthropic changed the pricing, he posted, “Funny how timings match up, first they copy some popular features into their closed harness, then they lock out open source.” Though he didn’t specify, he may have been referring to features added to Claude’s agent Cowork, such as Claude Dispatch, which lets users remotely control agents and assign tasks. Dispatch rolled out a couple of weeks before Anthropic changed its OpenClaw pricing policy.

Steinberger’s frustration with Anthropic was again on display Friday.

One person implied that some of this is on him for taking a job at OpenAI instead of Anthropic, posting, “You had the choice, but you went to the wrong one.” To which Steinberger replied: “One welcomed me, one sent legal threats.”

Ouch.

When multiple people asked him why he’s using Claude instead of his employer’s models at all, he explained that he only uses it for testing, to ensure updates to OpenClaw won’t break things for Claude users.

He explained: “You need to separate two things. My work at the OpenClaw Foundation where we wanna make OpenClaw work great for *any* model provider, and my job at OpenAI to help them with future product strategy.”

Multiple people also pointed out that he needs to test Claude because that model remains a more popular choice than ChatGPT among OpenClaw users. He heard the same point when Anthropic changed its pricing, to which he replied: “Working on that.” (So, that’s a clue about what his job at OpenAI entails.)

Steinberger did not respond to a request for comment.


