Here is where things stand. The US Defense Department designated Anthropic a national-security supply-chain risk after the company refused to allow its Claude models to be used for military surveillance and autonomous weapons.
A federal judge blocked the designation, ruling it likely violated constitutional protections. The Trump administration is now appealing that ruling. The President, separately, called Anthropic’s leadership “left-wing nut jobs” for holding that line.
Into that opening, Britain moved quickly.
The UK government is courting Anthropic with proposals that include expanding the company's London office footprint and pursuing a dual listing on the London Stock Exchange. Officials at the Department for Science, Innovation and Technology have drafted the proposals for Anthropic CEO Dario Amodei, who visits Britain in late May on a European customer and policy tour. Downing Street is backing the effort. London Mayor Sadiq Khan followed up in writing, pitching the capital as a “steadfast” base for the company. The FT broke the story on Sunday.
The proposal on the table is part expansion offer, part diplomatic signal. Britain wants Anthropic in London. It also wants to be seen wanting Anthropic in London, which is a different thing and equally intentional.
The honest subtext, acknowledged privately by officials, is that Britain has no homegrown frontier lab to rival the Americans. The strategy is partnership, not competition. The goal is to tie the best US labs to UK infrastructure, research base, and talent pipeline before other European capitals do. OpenAI has already committed to making London its largest research hub outside the US. Google is completing a roughly £1 billion King’s Cross campus. The Anthropic pitch fits a pattern.
But this story is not really about office space or stock listings. Those are instruments. The story is about what a government does when a private company refuses a government’s demand and gets punished for it, and another government decides that refusal is an asset worth recruiting.
Anthropic drew a line. It said Claude will not be used for surveillance. It said Claude will not be used for autonomous weapons. The Pentagon designated it a risk for saying so. That sequence is the thing worth sitting with, because it describes something new about where AI sits in the world right now.
For most of computing history, technology was neutral in the geopolitical sense. Governments bought it, used it, regulated it, but the tools themselves did not have positions. What is happening now is different. The major AI labs are being asked to take sides, not rhetorically, but operationally. Will your model help target people? Will it automate lethal decisions? The answer to those questions is becoming a foreign policy matter.
Britain is not offering Anthropic a home because it agrees with every position Anthropic holds. It is offering a home because a company willing to refuse the US military on ethical grounds is a company that other governments can negotiate with. That is valuable in a world where AI is becoming as strategically significant as energy or communications infrastructure.
A dual listing remains, in the words of one insider, “the dream” rather than a realistic near-term scenario, particularly with Anthropic expected to IPO in the US as early as this year. The legal cloud from the Pentagon appeal is still in place, and formal commitments are unlikely before that resolves.
What is not in question is the direction of travel. The AI labs are no longer just technology companies navigating markets. They are entities with enough independent weight that governments court them, punish them, and position themselves around them the way they once did around oil companies or defense contractors.
The question of whether that power comes with accountability, and to whom, and under which legal framework, is one nobody has answered yet. Britain is not answering it either. It is just making sure it has a seat at the table when someone does.
That is what this visit in late May is really about.