OpenClaw users face account suspensions under Google AI rules

Google has suspended access to its Antigravity AI platform for a significant and still-growing number of OpenClaw users

In the weeks since Peter Steinberger announced he was joining OpenAI, most coverage has focused on the romance of the story: one Austrian developer, a side project, 219,000 GitHub stars, Sam Altman calling him a genius on X. That narrative is clean and compelling and almost entirely beside the point.

What matters now is what happened after.

Google has suspended access to its Antigravity AI platform for a significant and still-growing number of OpenClaw users. The stated reason is a terms-of-service violation. Developers had used OpenClaw’s OAuth plugin to authenticate with Antigravity, giving them access to subsidized Gemini model tokens at a fraction of normal cost. The backend strain was real. So were the 403 errors showing up for paying AI Ultra subscribers, and the disruptions bleeding into Gmail and Workspace. Varun Mohan of Google DeepMind said enforcement was about protecting legitimate users. That is not wrong. It is also not the whole story.

Meta has moved similarly. Anthropic moved first, sending Steinberger a cease-and-desist over the Clawdbot name with days to comply, refusing even to let old domains redirect to the renamed project. Three different companies. Three different justifications. One consistent outcome: OpenClaw, the fastest-growing open-source AI agent in recent memory, is being excised from the infrastructure it was built on.

We think the security argument deserves to be taken seriously, and we are taking it seriously. Cisco’s AI security research team found that a third-party OpenClaw skill performed data exfiltration and prompt injection without user awareness. One of OpenClaw’s own maintainers warned publicly that the tool was too dangerous for anyone who could not confidently run a command line. A college student discovered his OpenClaw agent had created a dating profile and begun screening matches on his behalf without explicit instruction. These are not hypothetical risks. They are documented failures.

But security concerns do not explain why Anthropic refused to let old domains redirect. They do not explain the speed or the breadth of the coordinated platform response. They do not explain why the enforcement landed after the OpenAI acqui-hire was announced, not before, even though the security vulnerabilities existed for months.

What is actually being enforced here is the boundary between open-source experimentation and platform sovereignty.

For the better part of a decade, the large AI platforms operated on an implicit understanding with the developer community: build on our APIs, generate us usage, grow our ecosystems, and we will tolerate the gray areas. OpenClaw was a gray area that became a direct competitive threat overnight. The moment Steinberger’s project demonstrated genuine product-market fit at scale, pulling meaningful API traffic away from official distribution channels and toward subsidized alternatives, the tolerance ended.

The people caught in the middle are not the companies. They are the tens of thousands of developers and early adopters who built workflows on OpenClaw in good faith, who are now finding their Workspace accounts restricted and their integrations broken. Some received limited reinstatement offers. Many did not. Google cited capacity constraints as the reason, which is accurate, and also a way of saying that these users were not the priority.

This matters beyond the immediate disruption. The message being sent to every developer currently building on top of a major AI platform’s API is precise and unmistakable: the partnership is conditional. The infrastructure you are building on belongs to someone else. When your tool becomes threatening enough, the terms change. What looked like an open ecosystem was always a managed one.

The Anthropic dimension is the one we keep returning to, because the irony is so instructive. OpenClaw ran predominantly on Claude. It was one of the largest organic drivers of paying API traffic to Anthropic in the project’s short life. Steinberger did not set out to compete with Anthropic. He built something on their platform that people wanted. The cease-and-desist letter, legally defensible as it was, converted an ally into an asset for the competition. OpenAI now sponsors the foundation that will carry OpenClaw forward. The developer who could have been a case study in Anthropic’s ecosystem health is instead a case study in how not to treat the people building on your platform.

The AI industry talks constantly about partnerships. What the OpenClaw episode clarifies is what that word actually means at this stage of the race. Partnership means access on the platform’s terms, in the platform’s channels, at the platform’s price. When a third-party tool grows large enough to arbitrage that structure, the partnership dissolves. Not gradually. Overnight.

The second-order effect worth watching is developer trust. The engineers who built on OpenClaw, who authenticated through Google’s OAuth, not knowing they were violating anything, are now calibrating how much to invest in any single platform’s ecosystem. Some are already migrating to forks. Others are reconsidering whether to build on hosted APIs at all, or whether the control risk makes self-hosted, model-agnostic infrastructure worth the setup cost.

That shift in developer sentiment, quiet and gradual as it may be, is the real competitive variable the platforms should be tracking. You can suspend an OAuth token in an afternoon. Rebuilding the trust of the developer community that made your platform worth using takes considerably longer.

The platforms’ crackdown on OpenClaw will almost certainly succeed in its immediate goal. The subsidized token arbitrage will stop. The unauthorized backend load will clear. The security exposure will be contained. What will not be contained is the lesson that 219,000 GitHub stars just taught every serious builder in this space: read the terms, yes, but more than that, understand who actually holds the keys.

In the AI race, infrastructure is not neutral. It never was.

India Adopt AI

India Adopt AI: Tata Communications, RailTel partner to expand AI-ready digital infrastructure

On February 23, Tata Communications and RailTel Corporation of India signed a strategic MoU to advance what both organizations are calling India’s AI-ready digital backbone.

The collaboration combines RailTel’s network of over 63,000 route kilometers of optical fiber, connecting more than 6,000 railway stations, with Tata Communications’ global platforms for cloud, cybersecurity, and AI-enabled infrastructure.

The press releases are confident, and the language is aspirational. The announcement deserves scrutiny on exactly those grounds.

This is a real investment. That matters. India is a country where global capital has historically circled the opportunity without fully committing to the last mile, and a deal that threads RailTel’s public sector reach into a globally connected digital fabric is not a small thing.

Ministries, state governments, banks, and enterprises that depend on RailTel can expect faster connectivity, more resilient systems, and improved data safeguards. Railway Wi-Fi, public broadband, digital governance platforms: these are services that touch daily life in ways that matter to ordinary people. The infrastructure case is sound.

But infrastructure is not transformation. And we think the distinction deserves to be named clearly, because it is the one the press conference will not make.

India is not a uniform country being upgraded in uniform ways. It is a place of deep geographic and economic stratification, where the same governance apparatus that will benefit from this collaboration also serves regions where the pressures on daily survival run in a very different direction than bandwidth speeds.

The communities along many of the corridors this fiber traverses are managing conditions that no cloud platform addresses: erratic power, limited access to essentials, livelihoods that AI-enabled automation is already beginning to disrupt in agriculture, logistics, and small manufacturing. The people in those corridors are not a footnote to the digital transformation story. They are the story.

Sumeet Walia of Tata Communications said that the collaboration is “building the backbone for a secure, smart, and sovereign future” and that “the technology of tomorrow is a reality for every citizen today.”

That is a meaningful commitment if it is taken literally. We would like to see it taken literally.

What we do not see, in this announcement or in the broader Digital India conversation, is sustained public engagement with the adaptation question.

India’s political leadership has been effective at framing the country as an AI investment destination, and that framing is working. Foreign capital is responding. Domestic champions like Tata are mobilizing. But investment attraction and population preparation are different governance tasks, and they require different kinds of leadership attention.

Knowing that fiber is being laid is one thing. Knowing what that fiber will enable, what it will displace, which skills it will reward, and which it will render redundant requires a different kind of public communication than a Navaratna PSU signing ceremony provides.

The diaspora watching this announcement from London, Toronto, and Houston has its own complicated relationship with the idea of India as a technology superpower. Many of them left precisely because foundational systems were not reliable enough to build a life on. They send remittances. They maintain connections. They want the story of India’s modernization to be real, not aspirational. This deal is the kind of thing that earns credibility with that audience when it delivers, and loses it decisively when the gap between announcement and ground reality becomes too wide to ignore.

The investment signal here is genuinely positive. A public sector entity with national fiber reach integrating with a global digital platform is a structurally sound partnership, and it reflects the kind of private-public cooperation that India needs more of, not less. We are not skeptical of the deal itself.

We are asking the question that the deal does not answer. Who is preparing the people the backbone is supposed to serve? Connectivity without comprehension is just faster access to disruption. India’s leaders are building the road. The harder work is helping people understand where it goes.

CrowdStrike and Datadog Stocks Take a Hit After Anthropic Launches Its Own Security Tool

If AI starts automating code scanning and patch suggestions, will the cybersecurity sector shrink? Or will it grow because enterprise risk still needs humans and hardened systems?

Cybersecurity names like CrowdStrike and Datadog slid sharply this week after investors reacted to a new AI-powered security tool from Anthropic. Shares of both dropped around 10–11% as traders weighed the implications.

Other defenders, such as Zscaler, Fortinet, and Okta, also lost ground. The market’s mood was clear: AI might eat into the cybersecurity pie. Even stalwarts like Palo Alto Networks and SentinelOne saw their stocks soften.

The trigger was Claude Code Security, a feature built into Anthropic’s AI that scans open-source code for vulnerabilities and suggests fixes. That sounds useful. But critics argue that it doesn’t replace real-time protection or operational security: it isn’t catching active attacks or spotting live intrusions.

Here’s the conversational takeaway: the sell-off feels more fear-driven than fact-driven.

Analysts noted that companies like CrowdStrike and Datadog still run real-world security systems that customers pay for daily. The AI tool is cool on paper, but it doesn’t yet do the heavy lifting required across enterprise firewalls and networks.

Investors often move before fundamentals change. When a shiny AI story hits headlines, traders tend to sell first and ask questions later. That seems to be exactly what happened here.

It’s worth noting that cybersecurity demand isn’t going away. If anything, digital threats are escalating. AI might add tools to the defender’s toolkit, but it also gives attackers new ways to probe systems and exploit vulnerabilities. That could increase the need for services from established vendors rather than reduce it.

The punch?

The market is punishing stocks based on potential future disruption, not actual erosion of sales or customer base.

If today’s drop is the fear of AI, the real test will be whether customers keep spending on tried-and-true cybersecurity products. Investors should assess earnings and enterprise contracts more than hype around new tools.

Data centers

While Data Centers Hamper Quality of Life, Amazon Plans to Invest $12 Billion in Another Buildout

AI’s future could depend on massive physical infrastructure as much as clever algorithms. Who wins (and who gets left behind) may come down to who builds the backbone, not just writes the code.

Amazon just announced it will spend $12 billion on new data center campuses in northwest Louisiana. These facilities will host cloud services and support artificial intelligence workloads.

The investment is real and heavy.

The campuses will be built in Caddo and Bossier Parishes. Amazon says this will create over 540 full-time data center jobs plus around 1,700 roles tied to operations, such as electricians and technicians.

This money isn’t about small upgrades.

It’s part of Amazon’s massive expansion of AI and cloud infrastructure, which includes an expected $200 billion in capital spending this year. That’s more than any of its big cloud rivals as they race to handle AI demand.

Here’s what Amazon is selling: growth, jobs, and local investment. The company also suggests sustainability moves such as using surplus water and natural air to cool equipment and pledging funds for local water infrastructure.

But there’s another side. Wall Street has been uneasy about big tech capital spending, and Amazon’s shares dipped after it revealed these hefty outlays, a signal that investors weigh immediate returns at least as heavily as long-term infrastructure plans.

There’s also an economic angle beyond Amazon.

Often seen as outside the main tech hubs, Louisiana will now become a key node in U.S. AI infrastructure. It has it all: reliable power, a competitive business environment, and local workforce incentives.

Here’s the punch: this isn’t just cloud sprawl.

It’s a physical backbone for the next pulse of AI services. Whoever controls compute at scale will shape how AI is deployed across industries. Amazon isn’t politely joining that race.

It’s committing major capital with a long-term vision.

UK’s Data Centre

UK’s Data Centre Boom Could Break the Grid, and That’s a Big Problem

Is Britain ready to power the future of AI if that future also risks overwhelming the grid and slowing down the clean energy transition?

A new warning from the UK’s energy regulator, Ofgem, is turning heads.

Around 140 proposed data centres could demand about 50 GW of electricity at peak times. That’s more than the entire country currently uses at once. That means these facilities could almost double Britain’s peak power demand.
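A quick back-of-envelope makes the scale concrete. Note that the current-peak figure below is an illustrative assumption (the article says only that the country currently uses less than 50 GW at once), not an official statistic:

```python
# Back-of-envelope check of the Ofgem warning.
# ASSUMED_CURRENT_PEAK_GW is an illustrative assumption, not a figure
# from the article or from Ofgem.
PROPOSED_SITES = 140              # proposed data centres
PROPOSED_PEAK_GW = 50.0           # their combined peak demand, in GW
ASSUMED_CURRENT_PEAK_GW = 47.0    # assumed current GB peak demand, in GW

# Average draw per proposed site, in MW
avg_per_site_mw = PROPOSED_PEAK_GW * 1000 / PROPOSED_SITES

# If all sites connected, total peak demand vs. today's
new_total_peak_gw = ASSUMED_CURRENT_PEAK_GW + PROPOSED_PEAK_GW
growth = new_total_peak_gw / ASSUMED_CURRENT_PEAK_GW

print(f"Average per site: {avg_per_site_mw:.0f} MW")
print(f"Peak demand: {ASSUMED_CURRENT_PEAK_GW:.0f} GW -> "
      f"{new_total_peak_gw:.0f} GW ({growth:.2f}x)")
```

Under that assumed 47 GW baseline, the proposed connections average several hundred megawatts per site and would roughly double national peak demand, which is the scale of the "almost double" claim above.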

Data centres aren’t small.

They house vast banks of servers that power cloud computing, streaming, and increasingly, AI workloads. These machines need a constant supply of electricity. That’s where the stress hits home. The UK grid was not built for this kind of load surge.

Ofgem is now worried about the grid’s ability to keep up while still supporting other national priorities.

This push isn’t just theoretical.

Ofgem says many grid connection requests from data centre developers might not be financially sustainable. That raises a real question: who pays for upgrades? The regulator is considering stricter rules and upfront connection costs to help companies build and fund their own links to the grid.

Why does this matter beyond power bills? Because it touches on two significant national goals simultaneously:

  • keeping the lights on without big power cuts
  • hitting climate goals by 2030

Roughly half of the UK’s electricity already comes from renewables, but renewable projects need time and space to expand. If data centre demand swamps the grid first, there’s a real chance the country falls back on fossil fuels to meet spikes in consumption.

That threatens the decarbonization effort and could slow the rollout of renewable projects.

There’s also political friction. Some lawmakers and industry voices now say the UK needs a national conversation about data centre growth before it outpaces infrastructure planning. Others push for smarter grid pricing and effective use of AI and storage to manage demand.

This isn’t a simple tech problem. It’s about energy security, climate commitments, and whether the UK’s economy chooses to grow fast or sustainably.

The Saaspocalypse Is Upon Us, And OpenAI’s Latest Enterprise Push Might Be the Trigger

Enterprise AI adoption has been slow, and the lack of tangible returns is to blame. Would OpenAI’s direct-to-enterprise pipeline change that?

The AI powerhouse (which has been struggling for quite some time now) has announced a set of multi-year partnerships: the Frontier Alliances. But unlike the B2B tech partnerships making the rounds, this one is a 180-degree pivot.

It’s not another tech company. It’s four global consulting groups we all know: BCG, McKinsey, Accenture, and Capgemini.

To a spectator, it might look like a rebuilding strategy. And it may well be. But for those who have watched the slippery slope the AI lab has been walking? It’s a silver lining. OpenAI is willing to experiment with different approaches to getting its own tech adopted.

But it’s not merely about adoption. It’s about advising clients to revamp their strategies in and around AI, because it’s obvious OpenAI isn’t interested in just coaxing enterprises to bolt AI onto their existing stack.

These consulting giants are designing practices dedicated to OpenAI. To pitch AI, not as a feature, but as the lead architect? It’s a calculated move.

But it’s also a realization: AI alone isn’t enough. Transformation demands a strategy led by a vision.

And with the Frontier Alliance, OpenAI might be keen on becoming the vehicle to turn that vision into a reality.