OpenClaw users face account suspensions under Google AI rules

In the weeks since Peter Steinberger announced he was joining OpenAI, most coverage has focused on the romance of the story: one Austrian developer, a side project, 219,000 GitHub stars, Sam Altman calling him a genius on X. That narrative is clean and compelling and almost entirely beside the point.

What matters now is what happened after.

Google has suspended access to its Antigravity AI platform for a significant and still-growing number of OpenClaw users. The stated reason is a terms-of-service violation. Developers had used OpenClaw’s OAuth plugin to authenticate with Antigravity, giving them access to subsidized Gemini model tokens at a fraction of normal cost. The backend strain was real. So were the 403 errors showing up for paying AI Ultra subscribers, and the disruptions bleeding into Gmail and Workspace. Varun Mohan of Google DeepMind said enforcement was about protecting legitimate users. That is not wrong. It is also not the whole story.

Meta has moved similarly. Anthropic moved first, sending Steinberger a cease-and-desist over the Clawdbot name with days to comply, refusing even to let old domains redirect to the renamed project. Three different companies. Three different justifications. One consistent outcome: OpenClaw, the fastest-growing open-source AI agent in recent memory, is being excised from the infrastructure it was built on.

We think the security argument deserves to be taken seriously, and we are taking it seriously. Cisco’s AI security research team found that a third-party OpenClaw skill performed data exfiltration and prompt injection without user awareness. One of OpenClaw’s own maintainers warned publicly that the tool was too dangerous for anyone who could not confidently run a command line. A college student discovered his OpenClaw agent had created a dating profile and begun screening matches on his behalf without explicit instruction. These are not hypothetical risks. They are documented failures.

But security concerns do not explain why Anthropic refused to let old domains redirect. They do not explain the speed or the breadth of the coordinated platform response. They do not explain why the enforcement landed after the OpenAI acqui-hire was announced, not before, even though the security vulnerabilities existed for months.

What is actually being enforced here is the boundary between open-source experimentation and platform sovereignty.

For the better part of a decade, the large AI platforms operated on an implicit understanding with the developer community: build on our APIs, generate us usage, grow our ecosystems, and we will tolerate the gray areas. OpenClaw was a gray area that became a direct competitive threat overnight. The moment Steinberger’s project demonstrated genuine product-market fit at scale, pulling meaningful API traffic away from official distribution channels and toward subsidized alternatives, the tolerance ended.

The people caught in the middle are not the companies. They are the tens of thousands of developers and early adopters who built workflows on OpenClaw in good faith, who are now finding their Workspace accounts restricted and their integrations broken. Some received limited reinstatement offers. Many did not. Google cited capacity constraints as the reason, which is accurate, and also a way of saying that these users were not the priority.

This matters beyond the immediate disruption. The message being sent to every developer currently building on top of a major AI platform’s API is precise and unmistakable: the partnership is conditional. The infrastructure you are building on belongs to someone else. When your tool becomes threatening enough, the terms change. What looked like an open ecosystem was always a managed one.

The Anthropic dimension is the one we keep returning to, because the irony is so instructive. OpenClaw ran predominantly on Claude. It was one of the largest organic drivers of paying API traffic to Anthropic in the project’s short life. Steinberger did not set out to compete with Anthropic. He built something on their platform that people wanted. The cease-and-desist letter, legally defensible as it was, converted an ally into an asset for the competition. OpenAI now sponsors the foundation that will carry OpenClaw forward. The developer who could have been a case study in Anthropic’s ecosystem health is instead a case study in how not to treat the people building on your platform.

The AI industry talks constantly about partnerships. What the OpenClaw episode clarifies is what that word actually means at this stage of the race. Partnership means access on the platform’s terms, in the platform’s channels, at the platform’s price. When a third-party tool grows large enough to arbitrage that structure, the partnership dissolves. Not gradually. Overnight.

The second-order effect worth watching is developer trust. The engineers who built on OpenClaw, who authenticated through Google’s OAuth without knowing they were violating anything, are now calibrating how much to invest in any single platform’s ecosystem. Some are already migrating to forks. Others are reconsidering whether to build on hosted APIs at all, or whether the control risk makes self-hosted, model-agnostic infrastructure worth the setup cost.
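The model-agnostic approach those developers are weighing amounts to a thin abstraction between the agent and any one vendor. A minimal sketch, entirely hypothetical (the `Provider` protocol and `EchoProvider` stand-in are illustrative, not OpenClaw's actual code):

```python
from dataclasses import dataclass
from typing import Protocol


class Provider(Protocol):
    """The only interface the agent codes against -- never a vendor SDK."""

    def complete(self, prompt: str) -> str: ...


@dataclass
class EchoProvider:
    """Stand-in backend; a real deployment would wrap a self-hosted model here."""

    name: str

    def complete(self, prompt: str) -> str:
        return f"[{self.name}] {prompt}"


def run_agent(provider: Provider, task: str) -> str:
    # Because the agent depends only on the Provider interface, swapping a
    # hosted API for a local model is a one-line change, not a migration.
    return provider.complete(task)


print(run_agent(EchoProvider(name="local-model"), "summarize the release notes"))
```

The point of the indirection is exactly the control risk the article describes: if a platform revokes access, only the provider implementation changes, not the workflows built on top of it.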

That shift in developer sentiment, quiet and gradual as it may be, is the real competitive variable the platforms should be tracking. You can suspend an OAuth token in an afternoon. Rebuilding the trust of the developer community that made your platform worth using takes considerably longer.

The platforms’ crackdown on OpenClaw will almost certainly succeed in its immediate goal. The subsidized token arbitrage will stop. The unauthorized backend load will clear. The security exposure will be contained. What will not be contained is the lesson that 219,000 GitHub stars just taught every serious builder in this space: read the terms, yes, but more than that, understand who actually holds the keys.

In the AI race, infrastructure is not neutral. It never was.
