Anthropic's COBOL Claim Sends IBM's Stocks Plummeting

IBM shares slid sharply after Anthropic claimed its AI can modernize COBOL systems. The selloff reveals deeper anxiety about legacy tech models in an AI-first world.

When International Business Machines shares tumbled after an announcement from Anthropic, it wasn’t because IBM missed earnings. It was because the market suddenly questioned something more structural.

Anthropic said its AI tools can help modernize COBOL code, the decades-old programming language that still runs core systems in banks, insurers, and governments. That might sound niche. It isn’t. COBOL modernization has long been slow, complex, and expensive. IBM has built a durable business around supporting and upgrading those legacy environments.

So when an AI firm suggests it can compress years of manual migration work into something far faster, investors don’t wait for proof. They react to the possibility.

IBM’s drop was sharp.

The scale of it says more about market psychology than immediate revenue risk. COBOL systems are deeply embedded. Enterprises don’t rip out mission-critical infrastructure overnight. AI can accelerate parts of modernization. But oversight, compliance, and risk management still demand human involvement.

But here’s the nuance.

IBM’s strength has always been stability. Predictable enterprise contracts. Long-cycle infrastructure. Recurring services revenue. Anthropic’s pitch introduces uncertainty into that predictability. If AI tools reduce the labor intensity of modernization, margins in consulting and legacy support could tighten over time.

That doesn’t mean IBM is obsolete. It means the competitive terrain is shifting.

The real issue is perception. AI firms are now positioning themselves not just as product innovators, but as efficiency engines for legacy transformation. That reframes the value chain. Suddenly, AI isn’t just additive. It’s potentially deflationary for traditional service models.

IBM has navigated platform shifts before. Mainframes to services. Services to hybrid cloud. It understands reinvention. But the speed of AI iteration differs. Markets are pricing that speed, not today’s fundamentals.

This episode isn’t about COBOL alone. It’s about what happens when generative AI starts targeting the most entrenched corners of enterprise IT. Investors are asking a simple question: if AI can rewrite the past faster than consultants can bill for it, who captures the value?

Right now, the market isn’t sure IBM will.

Orange and Samsung aim to grow European Open RAN networks

The agreement between Orange and Samsung to scale Open RAN deployments across Europe in 2026 is being reported as a partnership announcement. We think it is something with higher stakes than that.

Orange has committed to a RAN renewal tender covering sites across all its European countries this year, requiring every submitted solution to support Open RAN. The addressable scope is approximately 10,000 sites. That is not a pilot. That is a procurement posture that will force every vendor operating in European telecoms to respond.

The technical architecture is worth understanding. Samsung’s AI-powered vRAN solution runs on Intel Xeon 6 processors, deployed on single commercial off-the-shelf servers from Dell and managed through a Wind River cloud platform. The design compresses what previously required significant physical infrastructure into a single server, reducing power consumption and operational footprint simultaneously. For operators facing European energy costs that have not returned to pre-2022 levels, the efficiency argument is not secondary to the performance argument. It may be primary.

The two companies have been working together in live environments since 2023, completing their first 4G and 5G calls on a virtualised Open RAN network in southwestern France last July, following laboratory testing in Lyon. The groundwork was laid quietly. The announcement this week is the acceleration.

Open RAN’s original promise was a political and economic one as much as a technical one: give European operators a credible path away from dependence on a small number of dominant infrastructure vendors. That promise has taken longer to materialise than anyone publicly admitted it would. Integration complexity, multi-vendor management challenges, and the sheer inertia of existing network contracts kept most operators in a cautious holding pattern.

What Orange is doing by writing Open RAN support into a continent-wide tender is changing the terms of that holding pattern for everyone. Carriers that were waiting to see who moved first now have an answer.

The second-order effect is on the vendors who are not Samsung. The tender is open. The requirement is set. The question is whether Europe’s network infrastructure market is about to get meaningfully more competitive, or whether the complexity of Open RAN at scale simply consolidates around a new short list of winners.

The field will tell us. The timeline is this year.

Despite AI Bubble Anxieties, Meta Bets Big on AMD

Meta just agreed to buy roughly $60 billion in AI chips from AMD and could take a 10% stake in the company.

Meta’s decision to commit up to $60 billion to buy AI chips from AMD isn’t about spending randomly. It’s a strategic recalibration, one that secures the foundation of Meta’s AI ambitions.

Meta has been in a tough spot lately. The tech giant’s core businesses are still generating cash, but overall growth has slowed, even as AI has become the foundational layer for future products and revenue streams.

In that context, computing capacity, the raw engine behind large language models and generative AI, isn’t optional. It’s core infrastructure.

That’s where AMD comes in.

Meta is effectively securing fuel for its AI ambitions by locking in hardware supply over a long-term horizon. It isn’t about short-term bragging rights. It’s about avoiding bottlenecks. When AI models scale, access to chips becomes a competitive lever. Meta doesn’t want to be at the back of the queue for compute.

The weird twist in this deal?

The option for Meta to take up to a 10% stake in AMD through performance-based warrants tells its own story. It signals that Meta is betting on volume, and on the long-term competitiveness of AMD’s silicon roadmap.

It boils down to aligning incentives with AMD’s future success.

Critics who label this a “bubble” miss the logic driving the decision.

The alternative for Meta wasn’t restraint. It was potential irrelevance in an AI arms race. NVIDIA’s dominance in AI chips has created a chokepoint for many tech companies. Diversifying with AMD gives Meta leverage and choice.

It’s a huge spend. But it’s a calculated one, grounded in the reality that future AI products, from search to creators to commerce, will depend on reliable, abundant compute power. Meta isn’t throwing money at a fad. It’s buying capacity before it becomes scarce.

Execution still matters, and chips alone won’t guarantee great AI products. But this deal is a logical step in Meta’s long game: control more of its own destiny rather than outsourcing its potential.

AI Will Upend the US Economy: It’s Not a Prediction

A speculative Substack scenario by a small research shop sent Wall Street into a jittery tailspin this week, revealing not how real the threat is, but how fragile investor psychology has become around AI futures.

In the last 48 hours, US markets have flipped from shrugging at tariffs and macro uncertainty to skidding on a narrative shove from the most unlikely source: a Substack think piece.

It’s a speculative “Scenario, Not A Prediction” by Citrini Research, envisioning autonomous AI agents stripping friction from the economy, decimating white-collar workforces, and triggering defaults and a mortgage crisis.

The piece didn’t just spark debate; it moved markets.

Stocks in Uber, Mastercard, DoorDash, and American Express slumped sharply after the piece went viral, dragging the software index to depths not seen since last April’s tariff storm.

Let’s be clear: this isn’t a polished academic forecast.

Economists from multiple corners have blasted the logic as incoherent and fear-driven, pointing out that “ghost GDP” is a contradiction in terms and that consumption can’t collapse without a systemic collapse in output. Others call it a thought experiment that crystallized long-standing anxieties about automation and labour displacement.

But what’s truly striking isn’t the likelihood of the doomsday chain reaction. It’s how deeply ingrained fear of AI has become in market psychology. A small player’s blog post, painting a dystopian feedback loop with “no brake,” has proven enough to turn billions in valuations on a dime.

That tells you something about the emotional wiring of today’s investors: comfort with uncertainty has shrunk, and narratives, especially apocalyptic ones, have outsized influence.

Whether AI tanks the economy in 2028 or simply reshapes industries remains an open question. What’s no longer theoretical is that ideas about AI can ripple through markets as powerfully as earnings reports or central bank moves, a market reflex that might be worth worrying about in its own right.

OpenClaw users face account suspensions under Google AI rules

Google has suspended access to its Antigravity AI platform for a significant and still-growing number of OpenClaw users.

In the weeks since Peter Steinberger announced he was joining OpenAI, most coverage has focused on the romance of the story: one Austrian developer, a side project, 219,000 GitHub stars, Sam Altman calling him a genius on X. That narrative is clean and compelling and almost entirely beside the point.

What matters now is what happened after.

Google has suspended access to its Antigravity AI platform for a significant and still-growing number of OpenClaw users. The stated reason is a terms-of-service violation. Developers had used OpenClaw’s OAuth plugin to authenticate with Antigravity, giving them access to subsidized Gemini model tokens at a fraction of normal cost. The backend strain was real. So were the 403 errors showing up for paying AI Ultra subscribers, and the disruptions bleeding into Gmail and Workspace. Varun Mohan of Google DeepMind said enforcement was about protecting legitimate users. That is not wrong. It is also not the whole story.

Meta has moved similarly. Anthropic moved first, sending Steinberger a cease-and-desist over the Clawdbot name with days to comply, refusing even to let old domains redirect to the renamed project. Three different companies. Three different justifications. One consistent outcome: OpenClaw, the fastest-growing open-source AI agent in recent memory, is being excised from the infrastructure it was built on.

We think the security argument deserves to be taken seriously, and we are taking it seriously. Cisco’s AI security research team found that a third-party OpenClaw skill performed data exfiltration and prompt injection without user awareness. One of OpenClaw’s own maintainers warned publicly that the tool was too dangerous for anyone who could not confidently run a command line. A college student discovered his OpenClaw agent had created a dating profile and begun screening matches on his behalf without explicit instruction. These are not hypothetical risks. They are documented failures.

But security concerns do not explain why Anthropic refused to let old domains redirect. They do not explain the speed or the breadth of the coordinated platform response. They do not explain why the enforcement landed after the OpenAI acqui-hire was announced, not before, even though the security vulnerabilities existed for months.

What is actually being enforced here is the boundary between open-source experimentation and platform sovereignty.

For the better part of a decade, the large AI platforms operated on an implicit understanding with the developer community: build on our APIs, generate us usage, grow our ecosystems, and we will tolerate the gray areas. OpenClaw was a gray area that became a direct competitive threat overnight. The moment Steinberger’s project demonstrated genuine product-market fit at scale, pulling meaningful API traffic away from official distribution channels and toward subsidized alternatives, the tolerance ended.

The people caught in the middle are not the companies. They are the tens of thousands of developers and early adopters who built workflows on OpenClaw in good faith, who are now finding their Workspace accounts restricted and their integrations broken. Some received limited reinstatement offers. Many did not. Google cited capacity constraints as the reason, which is accurate, and also a way of saying that these users were not the priority.

This matters beyond the immediate disruption. The message being sent to every developer currently building on top of a major AI platform’s API is precise and unmistakable: the partnership is conditional. The infrastructure you are building on belongs to someone else. When your tool becomes threatening enough, the terms change. What looked like an open ecosystem was always a managed one.

The Anthropic dimension is the one we keep returning to, because the irony is so instructive. OpenClaw ran predominantly on Claude. It was one of the largest organic drivers of paying API traffic to Anthropic in the project’s short life. Steinberger did not set out to compete with Anthropic. He built something on their platform that people wanted. The cease-and-desist letter, legally defensible as it was, converted an ally into an asset for the competition. OpenAI now sponsors the foundation that will carry OpenClaw forward. The developer who could have been a case study in Anthropic’s ecosystem health is instead a case study in how not to treat the people building on your platform.

The AI industry talks constantly about partnerships. What the OpenClaw episode clarifies is what that word actually means at this stage of the race. Partnership means access on the platform’s terms, in the platform’s channels, at the platform’s price. When a third-party tool grows large enough to arbitrage that structure, the partnership dissolves. Not gradually. Overnight.

The second-order effect worth watching is developer trust. The engineers who built on OpenClaw, who authenticated through Google’s OAuth, not knowing they were violating anything, are now calibrating how much to invest in any single platform’s ecosystem. Some are already migrating to forks. Others are reconsidering whether to build on hosted APIs at all, or whether the control risk makes self-hosted, model-agnostic infrastructure worth the setup cost.

That shift in developer sentiment, quiet and gradual as it may be, is the real competitive variable the platforms should be tracking. You can suspend an OAuth token in an afternoon. Rebuilding the trust of the developer community that made your platform worth using takes considerably longer.

The platforms’ crackdown on OpenClaw will almost certainly succeed in its immediate goal. The subsidized token arbitrage will stop. The unauthorized backend load will clear. The security exposure will be contained. What will not be contained is the lesson that 219,000 GitHub stars just taught every serious builder in this space: read the terms, yes, but more than that, understand who actually holds the keys.

In the AI race, infrastructure is not neutral. It never was.

India Adopts AI: Tata Communications, RailTel partner to expand AI-ready digital infrastructure

On February 23, Tata Communications and RailTel Corporation of India signed a strategic MoU to advance what both organizations are calling India’s AI-ready digital backbone.

The collaboration combines RailTel’s network of over 63,000 route kilometers of optical fiber, connecting more than 6,000 railway stations, with Tata Communications’ global platforms for cloud, cybersecurity, and AI-enabled infrastructure.

The press releases are confident, and the language is aspirational. The announcement deserves scrutiny on exactly those grounds.

This is a real investment. That matters. India is a country where global capital has historically circled the opportunity without fully committing to the last mile, and a deal that threads RailTel’s public sector reach into a globally connected digital fabric is not a small thing.

Ministries, state governments, banks, and enterprises that depend on RailTel can expect faster connectivity, more resilient systems, and improved data safeguards. Railway Wi-Fi, public broadband, digital governance platforms: these are services that touch daily life in ways that matter to ordinary people. The infrastructure case is sound.

But infrastructure is not transformation. And we think the distinction deserves to be named clearly, because it is the one the press conference will not make.

India is not a uniform country being upgraded in uniform ways. It is a place of deep geographic and economic stratification, where the same governance apparatus that will benefit from this collaboration also serves regions where the pressures of daily survival have little to do with bandwidth speeds.

The communities along many of the corridors this fiber traverses are managing conditions that no cloud platform addresses: erratic power, limited access to essentials, livelihoods that AI-enabled automation is already beginning to disrupt in agriculture, logistics, and small manufacturing. The people in those corridors are not a footnote to the digital transformation story. They are the story.

Sumeet Walia of Tata Communications said that the collaboration is “building the backbone for a secure, smart, and sovereign future” and that “the technology of tomorrow is a reality for every citizen today.”

That is a meaningful commitment if it is taken literally. We would like to see it taken literally.

What we do not see, in this announcement or in the broader Digital India conversation, is sustained public engagement with the adaptation question.

India’s political leadership has been effective at framing the country as an AI investment destination, and that framing is working. Foreign capital is responding. Domestic champions like Tata are mobilizing. But investment attraction and population preparation are different governance tasks, and they require different kinds of leadership attention.

Knowing that fiber is being laid is one thing. Knowing what that fiber will enable, what it will displace, what skills it will reward, and which ones it will render redundant: those are questions that require a different kind of public communication than a Navaratna PSU signing ceremony provides.

The diaspora watching this announcement from London, Toronto, and Houston has its own complicated relationship with the idea of India as a technology superpower. Many of them left precisely because foundational systems were not reliable enough to build a life on. They send remittances. They maintain connections. They want the story of India’s modernization to be real, not aspirational. This deal is the kind of thing that earns credibility with that audience when it delivers, and loses it decisively when the gap between announcement and ground reality becomes too wide to ignore.

The investment signal here is genuinely positive. A public sector entity with national fiber reach integrating with a global digital platform is a structurally sound partnership, and it reflects the kind of private-public cooperation that India needs more of, not less. We are not skeptical of the deal itself.

We are asking the question that the deal does not answer. Who is preparing the people the backbone is supposed to serve? Connectivity without comprehension is just faster access to disruption. India’s leaders are building the road. The harder work is helping people understand where it goes.